Dec 12 18:27:23.277861 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025 Dec 12 18:27:23.277901 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 18:27:23.277915 kernel: BIOS-provided physical RAM map: Dec 12 18:27:23.277926 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 12 18:27:23.277950 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 12 18:27:23.277962 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 12 18:27:23.277974 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Dec 12 18:27:23.277991 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Dec 12 18:27:23.278003 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 12 18:27:23.278014 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 12 18:27:23.278025 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 12 18:27:23.278036 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 12 18:27:23.278057 kernel: NX (Execute Disable) protection: active Dec 12 18:27:23.278069 kernel: APIC: Static calls initialized Dec 12 18:27:23.278082 kernel: SMBIOS 2.8 present. Dec 12 18:27:23.278095 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Dec 12 18:27:23.278107 kernel: DMI: Memory slots populated: 1/1 Dec 12 18:27:23.278130 kernel: Hypervisor detected: KVM Dec 12 18:27:23.278142 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 12 18:27:23.278154 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 12 18:27:23.279847 kernel: kvm-clock: using sched offset of 5078910780 cycles Dec 12 18:27:23.279862 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 12 18:27:23.279875 kernel: tsc: Detected 2499.998 MHz processor Dec 12 18:27:23.279888 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 12 18:27:23.279902 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 12 18:27:23.279933 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 12 18:27:23.279946 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 12 18:27:23.279959 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 12 18:27:23.279971 kernel: Using GB pages for direct mapping Dec 12 18:27:23.279983 kernel: ACPI: Early table checksum verification disabled Dec 12 18:27:23.279995 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Dec 12 18:27:23.280008 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:27:23.280021 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:27:23.280047 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:27:23.280060 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Dec 12 18:27:23.280072 kernel: ACPI: APIC 
0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:27:23.280085 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:27:23.280097 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:27:23.280110 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:27:23.280133 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Dec 12 18:27:23.280183 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Dec 12 18:27:23.280200 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Dec 12 18:27:23.280224 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Dec 12 18:27:23.280238 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Dec 12 18:27:23.280258 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Dec 12 18:27:23.280271 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Dec 12 18:27:23.280284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 12 18:27:23.280296 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 12 18:27:23.280310 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Dec 12 18:27:23.280323 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Dec 12 18:27:23.280336 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Dec 12 18:27:23.280354 kernel: Zone ranges: Dec 12 18:27:23.280367 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 12 18:27:23.280380 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Dec 12 18:27:23.280393 kernel: Normal empty Dec 12 18:27:23.280405 kernel: Device empty Dec 12 18:27:23.280418 kernel: Movable zone start for each node Dec 12 18:27:23.280431 kernel: Early memory node ranges Dec 12 18:27:23.280444 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 12 18:27:23.280461 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Dec 12 18:27:23.280474 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Dec 12 18:27:23.280487 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 12 18:27:23.280500 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 12 18:27:23.280513 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Dec 12 18:27:23.280525 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 12 18:27:23.280545 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 12 18:27:23.280569 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 12 18:27:23.280583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 12 18:27:23.280596 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 12 18:27:23.280609 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 12 18:27:23.280622 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 12 18:27:23.280635 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 12 18:27:23.280648 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 12 18:27:23.280672 kernel: TSC deadline timer available Dec 12 18:27:23.280685 kernel: CPU topo: Max. logical packages: 16 Dec 12 18:27:23.280698 kernel: CPU topo: Max. logical dies: 16 Dec 12 18:27:23.280711 kernel: CPU topo: Max. dies per package: 1 Dec 12 18:27:23.280723 kernel: CPU topo: Max. 
threads per core: 1 Dec 12 18:27:23.280736 kernel: CPU topo: Num. cores per package: 1 Dec 12 18:27:23.280749 kernel: CPU topo: Num. threads per package: 1 Dec 12 18:27:23.280762 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Dec 12 18:27:23.280791 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 12 18:27:23.280805 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 12 18:27:23.280818 kernel: Booting paravirtualized kernel on KVM Dec 12 18:27:23.280831 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 12 18:27:23.281065 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Dec 12 18:27:23.281079 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Dec 12 18:27:23.281093 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Dec 12 18:27:23.281123 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Dec 12 18:27:23.281136 kernel: kvm-guest: PV spinlocks enabled Dec 12 18:27:23.281150 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 12 18:27:23.281182 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 18:27:23.281197 kernel: random: crng init done Dec 12 18:27:23.281222 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 18:27:23.281235 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 12 18:27:23.281273 kernel: Fallback order for Node 0: 0 Dec 12 18:27:23.281288 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Dec 12 18:27:23.281301 kernel: Policy zone: DMA32 Dec 12 18:27:23.283193 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 18:27:23.283227 kernel: software IO TLB: area num 16. Dec 12 18:27:23.283242 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Dec 12 18:27:23.283255 kernel: Kernel/User page tables isolation: enabled Dec 12 18:27:23.283276 kernel: ftrace: allocating 40103 entries in 157 pages Dec 12 18:27:23.283290 kernel: ftrace: allocated 157 pages with 5 groups Dec 12 18:27:23.283303 kernel: Dynamic Preempt: voluntary Dec 12 18:27:23.283316 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 18:27:23.283330 kernel: rcu: RCU event tracing is enabled. Dec 12 18:27:23.283343 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Dec 12 18:27:23.283356 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 18:27:23.283374 kernel: Rude variant of Tasks RCU enabled. Dec 12 18:27:23.283388 kernel: Tracing variant of Tasks RCU enabled. Dec 12 18:27:23.283401 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 18:27:23.283414 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Dec 12 18:27:23.283427 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 12 18:27:23.283440 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Dec 12 18:27:23.283454 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 12 18:27:23.283466 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Dec 12 18:27:23.283484 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 18:27:23.283523 kernel: Console: colour VGA+ 80x25 Dec 12 18:27:23.283547 kernel: printk: legacy console [tty0] enabled Dec 12 18:27:23.283561 kernel: printk: legacy console [ttyS0] enabled Dec 12 18:27:23.283579 kernel: ACPI: Core revision 20240827 Dec 12 18:27:23.283594 kernel: APIC: Switch to symmetric I/O mode setup Dec 12 18:27:23.283607 kernel: x2apic enabled Dec 12 18:27:23.283621 kernel: APIC: Switched APIC routing to: physical x2apic Dec 12 18:27:23.283635 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 12 18:27:23.283659 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Dec 12 18:27:23.283674 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 12 18:27:23.283688 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 12 18:27:23.283701 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 12 18:27:23.283724 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 12 18:27:23.283738 kernel: Spectre V2 : Mitigation: Retpolines Dec 12 18:27:23.283751 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 12 18:27:23.283765 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 12 18:27:23.283778 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 12 18:27:23.283791 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 12 18:27:23.283804 kernel: MDS: Mitigation: Clear CPU buffers Dec 12 18:27:23.283817 kernel: MMIO Stale Data: Unknown: No mitigations Dec 12 18:27:23.283830 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 12 18:27:23.283843 kernel: active return thunk: its_return_thunk Dec 12 18:27:23.283856 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 12 18:27:23.283881 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 12 18:27:23.283894 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 12 18:27:23.283908 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 12 18:27:23.283921 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 12 18:27:23.283935 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 12 18:27:23.283948 kernel: Freeing SMP alternatives memory: 32K Dec 12 18:27:23.283961 kernel: pid_max: default: 32768 minimum: 301 Dec 12 18:27:23.283975 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 18:27:23.283988 kernel: landlock: Up and running. Dec 12 18:27:23.284013 kernel: SELinux: Initializing. Dec 12 18:27:23.284031 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 12 18:27:23.284044 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 12 18:27:23.284057 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Dec 12 18:27:23.284070 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Dec 12 18:27:23.284084 kernel: signal: max sigframe size: 1776 Dec 12 18:27:23.284110 kernel: rcu: Hierarchical SRCU implementation. Dec 12 18:27:23.284124 kernel: rcu: Max phase no-delay instances is 400. Dec 12 18:27:23.284138 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Dec 12 18:27:23.284156 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 12 18:27:23.284170 kernel: smp: Bringing up secondary CPUs ... Dec 12 18:27:23.284683 kernel: smpboot: x86: Booting SMP configuration: Dec 12 18:27:23.284700 kernel: .... node #0, CPUs: #1 Dec 12 18:27:23.284714 kernel: smp: Brought up 1 node, 2 CPUs Dec 12 18:27:23.284728 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Dec 12 18:27:23.284743 kernel: Memory: 1914104K/2096616K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 176496K reserved, 0K cma-reserved) Dec 12 18:27:23.284764 kernel: devtmpfs: initialized Dec 12 18:27:23.284778 kernel: x86/mm: Memory block size: 128MB Dec 12 18:27:23.284791 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 18:27:23.284805 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Dec 12 18:27:23.284819 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 18:27:23.284832 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 18:27:23.284846 kernel: audit: initializing netlink subsys (disabled) Dec 12 18:27:23.284864 kernel: audit: type=2000 audit(1765564039.116:1): state=initialized audit_enabled=0 res=1 Dec 12 18:27:23.284878 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 18:27:23.284891 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 12 18:27:23.284905 kernel: cpuidle: using governor menu Dec 12 18:27:23.284918 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 18:27:23.284932 kernel: dca service started, version 1.12.1 Dec 12 18:27:23.284953 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Dec 12 18:27:23.284973 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 12 18:27:23.284987 kernel: PCI: Using configuration type 1 for base access Dec 12 18:27:23.285001 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 12 18:27:23.285014 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 18:27:23.285028 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 18:27:23.285042 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 18:27:23.285056 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 18:27:23.285074 kernel: ACPI: Added _OSI(Module Device) Dec 12 18:27:23.285088 kernel: ACPI: Added _OSI(Processor Device) Dec 12 18:27:23.285102 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 18:27:23.285115 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 18:27:23.285129 kernel: ACPI: Interpreter enabled Dec 12 18:27:23.285143 kernel: ACPI: PM: (supports S0 S5) Dec 12 18:27:23.285169 kernel: ACPI: Using IOAPIC for interrupt routing Dec 12 18:27:23.285185 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 12 18:27:23.285219 kernel: PCI: Using E820 reservations for host bridge windows Dec 12 18:27:23.285234 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 12 18:27:23.285248 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 18:27:23.285587 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 12 18:27:23.285827 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 12 18:27:23.286068 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 12 18:27:23.286089 kernel: PCI host bridge to bus 0000:00 Dec 12 18:27:23.288392 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 12 18:27:23.288627 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 12 18:27:23.288841 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 12 18:27:23.289051 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 12 18:27:23.289311 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 12 18:27:23.289531 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Dec 12 18:27:23.289741 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 18:27:23.290016 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 12 18:27:23.292357 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Dec 12 18:27:23.292622 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Dec 12 18:27:23.292888 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Dec 12 18:27:23.293128 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Dec 12 18:27:23.295197 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 12 18:27:23.295476 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:27:23.295710 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Dec 12 18:27:23.295958 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 12 18:27:23.296999 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 12 18:27:23.297286 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 12 18:27:23.298860 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:27:23.299109 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Dec 12 18:27:23.299394 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 12 
18:27:23.299647 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 12 18:27:23.299876 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 12 18:27:23.300136 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:27:23.300397 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Dec 12 18:27:23.300627 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 12 18:27:23.300864 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 12 18:27:23.301128 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 12 18:27:23.301403 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:27:23.301631 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Dec 12 18:27:23.301858 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 12 18:27:23.302082 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 12 18:27:23.302345 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 12 18:27:23.302627 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:27:23.302864 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Dec 12 18:27:23.303090 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 12 18:27:23.303364 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 12 18:27:23.303594 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 12 18:27:23.303843 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:27:23.304093 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Dec 12 18:27:23.304430 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 12 18:27:23.304659 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 12 18:27:23.304885 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 12 18:27:23.305142 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:27:23.305404 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Dec 12 18:27:23.305650 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 12 18:27:23.305875 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 12 18:27:23.306099 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 12 18:27:23.306386 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 18:27:23.306628 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Dec 12 18:27:23.306875 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 12 18:27:23.307102 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 12 18:27:23.307364 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 12 18:27:23.307614 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 12 18:27:23.307844 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Dec 12 18:27:23.312967 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Dec 12 18:27:23.313234 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Dec 12 18:27:23.313465 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Dec 12 18:27:23.313703 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 12 18:27:23.313942 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Dec 12 18:27:23.314299 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Dec 12 18:27:23.314557 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Dec 12 18:27:23.314804 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 12 18:27:23.315032 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 12 18:27:23.315324 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 12 18:27:23.315554 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Dec 12 18:27:23.315781 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Dec 12 18:27:23.316036 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 12 18:27:23.316307 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Dec 12 18:27:23.316551 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Dec 12 18:27:23.316781 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Dec 12 18:27:23.317016 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 12 18:27:23.317278 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 12 18:27:23.317526 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 12 18:27:23.317787 kernel: pci_bus 0000:02: extended config space not accessible Dec 12 18:27:23.318040 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Dec 12 18:27:23.318336 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Dec 12 18:27:23.318570 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 12 18:27:23.318846 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Dec 12 18:27:23.319080 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Dec 12 18:27:23.320228 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 12 18:27:23.320482 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Dec 12 18:27:23.320717 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Dec 12 18:27:23.320945 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 12 18:27:23.321238 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 12 18:27:23.321479 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 12 18:27:23.321708 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 12 18:27:23.321935 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 12 18:27:23.322179 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 12 18:27:23.323286 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 12 18:27:23.323302 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 12 18:27:23.323317 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 12 18:27:23.323331 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 12 18:27:23.323344 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 12 18:27:23.323365 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 12 18:27:23.323380 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 12 18:27:23.323405 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 12 18:27:23.323419 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 12 18:27:23.323433 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 12 18:27:23.323447 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 12 18:27:23.323461 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 12 18:27:23.323475 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 
12 18:27:23.323490 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 12 18:27:23.323514 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 12 18:27:23.323529 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 12 18:27:23.323542 kernel: iommu: Default domain type: Translated Dec 12 18:27:23.323557 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 12 18:27:23.323571 kernel: PCI: Using ACPI for IRQ routing Dec 12 18:27:23.323584 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 12 18:27:23.323598 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 12 18:27:23.323622 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Dec 12 18:27:23.323857 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 12 18:27:23.324108 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 12 18:27:23.325546 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 12 18:27:23.325571 kernel: vgaarb: loaded Dec 12 18:27:23.325586 kernel: clocksource: Switched to clocksource kvm-clock Dec 12 18:27:23.325600 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 18:27:23.325631 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 18:27:23.325646 kernel: pnp: PnP ACPI init Dec 12 18:27:23.325915 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 12 18:27:23.325939 kernel: pnp: PnP ACPI: found 5 devices Dec 12 18:27:23.325953 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 12 18:27:23.325967 kernel: NET: Registered PF_INET protocol family Dec 12 18:27:23.325981 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 18:27:23.326010 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 12 18:27:23.326025 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 18:27:23.326039 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 12 18:27:23.326053 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 12 18:27:23.326067 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 12 18:27:23.326081 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 12 18:27:23.326095 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 12 18:27:23.326121 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 18:27:23.326135 kernel: NET: Registered PF_XDP protocol family Dec 12 18:27:23.326396 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Dec 12 18:27:23.326626 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 12 18:27:23.326853 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 12 18:27:23.327080 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 12 18:27:23.327354 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 12 18:27:23.327582 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 12 18:27:23.327807 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 12 18:27:23.328034 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 12 18:27:23.328324 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Dec 12 18:27:23.328552 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Dec 12 18:27:23.328776 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Dec 12 18:27:23.329019 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Dec 12 18:27:23.329293 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Dec 12 18:27:23.329522 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Dec 12 18:27:23.329747 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Dec 12 18:27:23.329972 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Dec 12 18:27:23.330275 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 12 18:27:23.330576 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 12 18:27:23.330802 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 12 18:27:23.331036 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 12 18:27:23.331295 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 12 18:27:23.331521 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 12 18:27:23.331745 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 12 18:27:23.331969 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 12 18:27:23.332255 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 12 18:27:23.332483 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 12 18:27:23.332708 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 12 18:27:23.332933 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 12 18:27:23.333171 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 12 18:27:23.333425 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 12 18:27:23.333653 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 12 18:27:23.333878 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 12 18:27:23.334102 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 12 18:27:23.334398 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 12 18:27:23.334645 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 12 18:27:23.334871 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 12 18:27:23.335097 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 12 18:27:23.335437 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 12 18:27:23.335667 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 12 18:27:23.335892 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 12 18:27:23.336137 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 12 18:27:23.336405 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 12 18:27:23.336631 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 12 18:27:23.336856 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Dec 12 18:27:23.337080 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 12 18:27:23.337386 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 12 18:27:23.337615 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 12 18:27:23.337839 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 12 18:27:23.338108 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 12 18:27:23.338377 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 12 18:27:23.338595 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 12 18:27:23.338805 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 12 18:27:23.339024 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 12 18:27:23.339273 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 12 18:27:23.339500 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 12 18:27:23.339710 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Dec 12 18:27:23.339937 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 12 18:27:23.340153 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Dec 12 18:27:23.340439 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 12 18:27:23.340668 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 12 18:27:23.340919 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Dec 12 18:27:23.341136 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 12 18:27:23.341402 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 12 18:27:23.341628 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Dec 12 18:27:23.341843 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 12 18:27:23.342068 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 12 18:27:23.342352 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 12 18:27:23.342569 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 12 18:27:23.342783 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 12 18:27:23.343015 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Dec 12 18:27:23.343262 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 12 18:27:23.343498 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 12 18:27:23.343723 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Dec 12 18:27:23.343938 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 12 18:27:23.344151 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 12 18:27:23.344418 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 12 18:27:23.344634 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 12 18:27:23.344873 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 12 18:27:23.345097 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 12 18:27:23.345349 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 12 18:27:23.345564 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 12 18:27:23.345588 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 12 18:27:23.345603 kernel: PCI: CLS 0 bytes, default 64 Dec 12 18:27:23.345634 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 12 18:27:23.345649 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 12 18:27:23.345664 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 12 18:27:23.345679 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 12 18:27:23.345693 kernel: Initialise system trusted keyrings Dec 12 18:27:23.345708 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 12 18:27:23.345722 
kernel: Key type asymmetric registered Dec 12 18:27:23.345748 kernel: Asymmetric key parser 'x509' registered Dec 12 18:27:23.345763 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 12 18:27:23.345777 kernel: io scheduler mq-deadline registered Dec 12 18:27:23.345792 kernel: io scheduler kyber registered Dec 12 18:27:23.345806 kernel: io scheduler bfq registered Dec 12 18:27:23.346033 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 12 18:27:23.346305 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 12 18:27:23.346552 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 18:27:23.346780 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 12 18:27:23.347007 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 12 18:27:23.347264 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 18:27:23.347492 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 12 18:27:23.347736 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 12 18:27:23.347963 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 18:27:23.348229 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 12 18:27:23.348458 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 12 18:27:23.348685 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 18:27:23.348928 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 12 18:27:23.349170 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 12 18:27:23.349415 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 18:27:23.349642 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 12 18:27:23.349868 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 12 18:27:23.350111 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 18:27:23.350380 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 12 18:27:23.350606 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 12 18:27:23.350833 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 18:27:23.351057 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 12 18:27:23.351332 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 12 18:27:23.351580 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 18:27:23.351605 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 12 18:27:23.351620 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 12 18:27:23.351635 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 12 18:27:23.351650 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 18:27:23.351681 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 12 18:27:23.351696 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 12 
18:27:23.351710 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 12 18:27:23.351725 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 12 18:27:23.351750 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 12 18:27:23.352002 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 12 18:27:23.352345 kernel: rtc_cmos 00:03: registered as rtc0 Dec 12 18:27:23.352590 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:27:21 UTC (1765564041) Dec 12 18:27:23.352809 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 12 18:27:23.352832 kernel: intel_pstate: CPU model not supported Dec 12 18:27:23.352846 kernel: NET: Registered PF_INET6 protocol family Dec 12 18:27:23.352861 kernel: Segment Routing with IPv6 Dec 12 18:27:23.352875 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 18:27:23.352905 kernel: NET: Registered PF_PACKET protocol family Dec 12 18:27:23.352921 kernel: Key type dns_resolver registered Dec 12 18:27:23.352935 kernel: IPI shorthand broadcast: enabled Dec 12 18:27:23.352950 kernel: sched_clock: Marking stable (2219003608, 226172220)->(2569547175, -124371347) Dec 12 18:27:23.352965 kernel: registered taskstats version 1 Dec 12 18:27:23.352980 kernel: Loading compiled-in X.509 certificates Dec 12 18:27:23.352994 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4' Dec 12 18:27:23.353019 kernel: Demotion targets for Node 0: null Dec 12 18:27:23.353034 kernel: Key type .fscrypt registered Dec 12 18:27:23.353049 kernel: Key type fscrypt-provisioning registered Dec 12 18:27:23.353063 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 18:27:23.353078 kernel: ima: Allocated hash algorithm: sha1 Dec 12 18:27:23.353092 kernel: ima: No architecture policies found Dec 12 18:27:23.353107 kernel: clk: Disabling unused clocks Dec 12 18:27:23.353121 kernel: Freeing unused kernel image (initmem) memory: 15464K Dec 12 18:27:23.353147 kernel: Write protecting the kernel read-only data: 45056k Dec 12 18:27:23.353179 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Dec 12 18:27:23.353194 kernel: Run /init as init process Dec 12 18:27:23.353221 kernel: with arguments: Dec 12 18:27:23.353236 kernel: /init Dec 12 18:27:23.353250 kernel: with environment: Dec 12 18:27:23.353264 kernel: HOME=/ Dec 12 18:27:23.353291 kernel: TERM=linux Dec 12 18:27:23.353306 kernel: ACPI: bus type USB registered Dec 12 18:27:23.353321 kernel: usbcore: registered new interface driver usbfs Dec 12 18:27:23.353336 kernel: usbcore: registered new interface driver hub Dec 12 18:27:23.353350 kernel: usbcore: registered new device driver usb Dec 12 18:27:23.353588 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 12 18:27:23.353829 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 12 18:27:23.354080 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 12 18:27:23.354411 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 12 18:27:23.354647 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 12 18:27:23.354878 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 12 18:27:23.355209 kernel: hub 1-0:1.0: USB hub found Dec 12 18:27:23.355466 kernel: hub 1-0:1.0: 4 ports detected Dec 12 18:27:23.355760 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 12 18:27:23.356033 kernel: hub 2-0:1.0: USB hub found Dec 12 18:27:23.356332 kernel: hub 2-0:1.0: 4 ports detected Dec 12 18:27:23.356355 kernel: SCSI subsystem initialized Dec 12 18:27:23.356370 kernel: libata version 3.00 loaded. Dec 12 18:27:23.356615 kernel: ahci 0000:00:1f.2: version 3.0 Dec 12 18:27:23.356639 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 12 18:27:23.356860 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 12 18:27:23.357086 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 12 18:27:23.357345 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 12 18:27:23.357613 kernel: scsi host0: ahci Dec 12 18:27:23.357872 kernel: scsi host1: ahci Dec 12 18:27:23.358130 kernel: scsi host2: ahci Dec 12 18:27:23.358411 kernel: scsi host3: ahci Dec 12 18:27:23.358660 kernel: scsi host4: ahci Dec 12 18:27:23.358923 kernel: scsi host5: ahci Dec 12 18:27:23.358962 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Dec 12 18:27:23.358978 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Dec 12 18:27:23.358992 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Dec 12 18:27:23.359007 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Dec 12 18:27:23.359022 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Dec 12 18:27:23.359036 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Dec 12 18:27:23.359348 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 12 18:27:23.359388 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 18:27:23.359403 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 12 18:27:23.359428 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 12 18:27:23.359444 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 12 18:27:23.359459 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 12 18:27:23.359473 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 12 18:27:23.359498 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 12 18:27:23.359752 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 12 18:27:23.359776 kernel: usbcore: registered new interface driver usbhid Dec 12 18:27:23.359993 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 12 18:27:23.360015 kernel: usbhid: USB HID core driver Dec 12 18:27:23.360030 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 18:27:23.360045 kernel: GPT:25804799 != 125829119 Dec 12 18:27:23.360075 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 18:27:23.360090 kernel: GPT:25804799 != 125829119 Dec 12 18:27:23.360103 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 18:27:23.360118 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 12 18:27:23.360133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:27:23.360459 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 12 18:27:23.360499 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 12 18:27:23.360515 kernel: device-mapper: uevent: version 1.0.3 Dec 12 18:27:23.360529 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 18:27:23.360544 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 12 18:27:23.360558 kernel: raid6: sse2x4 gen() 7627 MB/s Dec 12 18:27:23.360573 kernel: raid6: sse2x2 gen() 5390 MB/s Dec 12 18:27:23.360587 kernel: raid6: sse2x1 gen() 5330 MB/s Dec 12 18:27:23.360614 kernel: raid6: using algorithm sse2x4 gen() 7627 MB/s Dec 12 18:27:23.360628 kernel: raid6: .... xor() 4935 MB/s, rmw enabled Dec 12 18:27:23.360643 kernel: raid6: using ssse3x2 recovery algorithm Dec 12 18:27:23.360657 kernel: xor: automatically using best checksumming function avx Dec 12 18:27:23.360672 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 18:27:23.360687 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (195) Dec 12 18:27:23.360701 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f Dec 12 18:27:23.360727 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:27:23.360743 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 18:27:23.360757 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 18:27:23.360771 kernel: loop: module loaded Dec 12 18:27:23.360786 kernel: loop0: detected capacity change from 0 to 100136 Dec 12 18:27:23.360800 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 18:27:23.360817 systemd[1]: Successfully made /usr/ read-only. Dec 12 18:27:23.360847 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:27:23.360864 systemd[1]: Detected virtualization kvm. Dec 12 18:27:23.360879 systemd[1]: Detected architecture x86-64. Dec 12 18:27:23.360894 systemd[1]: Running in initrd. Dec 12 18:27:23.360909 systemd[1]: No hostname configured, using default hostname. Dec 12 18:27:23.360935 systemd[1]: Hostname set to . Dec 12 18:27:23.360952 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 18:27:23.360968 systemd[1]: Queued start job for default target initrd.target. Dec 12 18:27:23.360983 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:27:23.360999 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:27:23.361014 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:27:23.361030 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 18:27:23.361057 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:27:23.361074 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 18:27:23.361090 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 18:27:23.361106 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 12 18:27:23.361122 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:27:23.361137 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:27:23.361180 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:27:23.361197 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:27:23.361225 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:27:23.361241 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:27:23.361256 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:27:23.361271 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:27:23.361286 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:27:23.361316 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 18:27:23.361332 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 18:27:23.361348 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:27:23.361363 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:27:23.361379 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:27:23.361395 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:27:23.361421 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 18:27:23.361437 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 18:27:23.361452 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:27:23.361468 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 18:27:23.361484 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 18:27:23.361499 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 18:27:23.361515 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:27:23.361542 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:27:23.361558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:27:23.361574 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 18:27:23.361600 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:27:23.361616 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 18:27:23.361632 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:27:23.361702 systemd-journald[330]: Collecting audit messages is enabled. Dec 12 18:27:23.361749 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 18:27:23.361764 kernel: Bridge firewalling registered Dec 12 18:27:23.361780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:27:23.361796 kernel: audit: type=1130 audit(1765564043.307:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:23.361811 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:27:23.361827 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:27:23.361855 systemd-journald[330]: Journal started Dec 12 18:27:23.361882 systemd-journald[330]: Runtime Journal (/run/log/journal/ed72cb880d474555b2723233143bc54e) is 4.7M, max 37.8M, 33M free. Dec 12 18:27:23.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.297907 systemd-modules-load[333]: Inserted module 'br_netfilter' Dec 12 18:27:23.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.396182 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:27:23.396230 kernel: audit: type=1130 audit(1765564043.393:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.407232 kernel: audit: type=1130 audit(1765564043.401:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.407982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:27:23.420740 kernel: audit: type=1130 audit(1765564043.408:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.414417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 18:27:23.427367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:27:23.434380 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:27:23.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.440098 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:27:23.446686 kernel: audit: type=1130 audit(1765564043.440:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.443665 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 12 18:27:23.442000 audit: BPF prog-id=6 op=LOAD Dec 12 18:27:23.450412 kernel: audit: type=1334 audit(1765564043.442:7): prog-id=6 op=LOAD Dec 12 18:27:23.460858 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:27:23.462842 systemd-tmpfiles[355]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 18:27:23.474889 kernel: audit: type=1130 audit(1765564043.462:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.474926 kernel: audit: type=1130 audit(1765564043.469:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.465259 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:27:23.472193 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 18:27:23.479033 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:27:23.487403 kernel: audit: type=1130 audit(1765564043.480:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.499210 dracut-cmdline[370]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 18:27:23.554429 systemd-resolved[363]: Positive Trust Anchors: Dec 12 18:27:23.555536 systemd-resolved[363]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:27:23.555548 systemd-resolved[363]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 18:27:23.555593 systemd-resolved[363]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:27:23.597303 systemd-resolved[363]: Defaulting to hostname 'linux'. Dec 12 18:27:23.600089 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:27:23.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.601832 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:27:23.627223 kernel: Loading iSCSI transport class v2.0-870. Dec 12 18:27:23.646191 kernel: iscsi: registered transport (tcp) Dec 12 18:27:23.674559 kernel: iscsi: registered transport (qla4xxx) Dec 12 18:27:23.674668 kernel: QLogic iSCSI HBA Driver Dec 12 18:27:23.712954 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:27:23.750977 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:27:23.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.754314 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:27:23.823960 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 18:27:23.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.826937 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 18:27:23.828612 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 18:27:23.877570 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:27:23.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.879000 audit: BPF prog-id=7 op=LOAD Dec 12 18:27:23.880000 audit: BPF prog-id=8 op=LOAD Dec 12 18:27:23.881435 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:27:23.919482 systemd-udevd[610]: Using default interface naming scheme 'v257'. Dec 12 18:27:23.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.935482 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
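The dracut-cmdline entry above records the full kernel command line that the initrd hooks consume: flatcar.oem.id=openstack selects the OpenStack provider, root=LABEL=ROOT and mount.usr=/dev/mapper/usr pick the root and /usr devices, and verity.usrhash pins the dm-verity root hash for /usr. As a rough sketch only (not the parser dracut or Ignition actually use), splitting such a command line into bare flags and key/value pairs can look like this:

```python
# Minimal sketch: split a kernel command line, as logged by dracut-cmdline above,
# into bare flags and key=value pairs. CMDLINE is abbreviated from the log.
CMDLINE = (
    "rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro "
    "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
    "flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin"
)

def parse_cmdline(cmdline: str) -> dict[str, list[str]]:
    """Group values by key; bare flags (e.g. flatcar.autologin) map to ['']."""
    params: dict[str, list[str]] = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        params.setdefault(key, []).append(value)
    return params

if __name__ == "__main__":
    params = parse_cmdline(CMDLINE)
    print(params["flatcar.oem.id"])        # ['openstack']
    print(params["console"])               # ['ttyS0,115200n8', 'tty0']
    print("flatcar.autologin" in params)   # True
```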
Dec 12 18:27:23.941354 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 18:27:23.980623 dracut-pre-trigger[674]: rd.md=0: removing MD RAID activation Dec 12 18:27:23.990973 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:27:23.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.993000 audit: BPF prog-id=9 op=LOAD Dec 12 18:27:23.995425 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:27:24.023420 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:27:24.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:24.028391 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:27:24.056325 systemd-networkd[724]: lo: Link UP Dec 12 18:27:24.056339 systemd-networkd[724]: lo: Gained carrier Dec 12 18:27:24.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:24.058830 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:27:24.059670 systemd[1]: Reached target network.target - Network. Dec 12 18:27:24.186037 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:27:24.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:24.192239 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 18:27:24.298728 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 18:27:24.344244 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 18:27:24.358844 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:27:24.370732 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 18:27:24.374368 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 18:27:24.393704 disk-uuid[774]: Primary Header is updated. Dec 12 18:27:24.393704 disk-uuid[774]: Secondary Entries is updated. Dec 12 18:27:24.393704 disk-uuid[774]: Secondary Header is updated. Dec 12 18:27:24.471203 kernel: cryptd: max_cpu_qlen set to 1000 Dec 12 18:27:24.508450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:27:24.508884 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:27:24.523610 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 12 18:27:24.523647 kernel: audit: type=1131 audit(1765564044.511:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:24.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:24.511381 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:27:24.527635 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:27:24.547365 kernel: AES CTR mode by8 optimization enabled Dec 12 18:27:24.568198 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 12 18:27:24.614980 systemd-networkd[724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:27:24.614996 systemd-networkd[724]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:27:24.616926 systemd-networkd[724]: eth0: Link UP Dec 12 18:27:24.617359 systemd-networkd[724]: eth0: Gained carrier Dec 12 18:27:24.617375 systemd-networkd[724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:27:24.642271 systemd-networkd[724]: eth0: DHCPv4 address 10.230.23.66/30, gateway 10.230.23.65 acquired from 10.230.23.65 Dec 12 18:27:24.717389 kernel: audit: type=1130 audit(1765564044.709:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:24.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:24.708693 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:27:24.726865 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 18:27:24.733443 kernel: audit: type=1130 audit(1765564044.727:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:24.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:24.735965 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:27:24.736843 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:27:24.738574 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:27:24.741928 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 18:27:24.782404 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:27:24.789249 kernel: audit: type=1130 audit(1765564044.783:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:24.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:25.451071 disk-uuid[775]: Warning: The kernel is still using the old partition table. 
Dec 12 18:27:25.451071 disk-uuid[775]: The new table will be used at the next reboot or after you Dec 12 18:27:25.451071 disk-uuid[775]: run partprobe(8) or kpartx(8) Dec 12 18:27:25.451071 disk-uuid[775]: The operation has completed successfully. Dec 12 18:27:25.462322 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 18:27:25.462516 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 18:27:25.474988 kernel: audit: type=1130 audit(1765564045.464:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:25.475037 kernel: audit: type=1131 audit(1765564045.464:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:25.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:25.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:25.468421 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 18:27:25.522750 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (866) Dec 12 18:27:25.522845 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:27:25.525760 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:27:25.532303 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:27:25.532373 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:27:25.543206 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:27:25.544681 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 18:27:25.551463 kernel: audit: type=1130 audit(1765564045.545:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:25.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:25.548357 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 18:27:25.796347 ignition[885]: Ignition 2.22.0 Dec 12 18:27:25.796375 ignition[885]: Stage: fetch-offline Dec 12 18:27:25.796459 ignition[885]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:27:25.796480 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:27:25.805582 kernel: audit: type=1130 audit(1765564045.799:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:25.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:25.798842 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:27:25.796626 ignition[885]: parsed url from cmdline: "" Dec 12 18:27:25.802421 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 12 18:27:25.796634 ignition[885]: no config URL provided Dec 12 18:27:25.796651 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:27:25.796671 ignition[885]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:27:25.796688 ignition[885]: failed to fetch config: resource requires networking Dec 12 18:27:25.797054 ignition[885]: Ignition finished successfully Dec 12 18:27:25.841125 ignition[891]: Ignition 2.22.0 Dec 12 18:27:25.841151 ignition[891]: Stage: fetch Dec 12 18:27:25.841374 ignition[891]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:27:25.841392 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:27:25.841504 ignition[891]: parsed url from cmdline: "" Dec 12 18:27:25.841511 ignition[891]: no config URL provided Dec 12 18:27:25.841521 ignition[891]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:27:25.841535 ignition[891]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:27:25.841699 ignition[891]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 12 18:27:25.841969 ignition[891]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 12 18:27:25.842004 ignition[891]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 12 18:27:25.856397 ignition[891]: GET result: OK Dec 12 18:27:25.857045 ignition[891]: parsing config with SHA512: 96cac92a122efcfbdacd82002460bd55ea45ab5cc05062af32bb75fef5241d48e5c9a80e3768012d88910d40c73b6f597a6c5200fce367d1da243d3cd761ecec Dec 12 18:27:25.862143 unknown[891]: fetched base config from "system" Dec 12 18:27:25.863114 unknown[891]: fetched base config from "system" Dec 12 18:27:25.863130 unknown[891]: fetched user config from "openstack" Dec 12 18:27:25.863688 ignition[891]: fetch: fetch complete Dec 12 18:27:25.863697 ignition[891]: fetch: fetch passed Dec 12 18:27:25.863773 ignition[891]: Ignition finished successfully Dec 12 18:27:25.872671 kernel: audit: type=1130 audit(1765564045.867:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:25.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:25.866439 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 18:27:25.870361 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 18:27:25.914891 ignition[897]: Ignition 2.22.0 Dec 12 18:27:25.914919 ignition[897]: Stage: kargs Dec 12 18:27:25.915152 ignition[897]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:27:25.915200 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:27:25.918272 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 18:27:25.925349 kernel: audit: type=1130 audit(1765564045.919:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:25.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:25.916540 ignition[897]: kargs: kargs passed Dec 12 18:27:25.923384 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 18:27:25.916618 ignition[897]: Ignition finished successfully Dec 12 18:27:25.961327 ignition[903]: Ignition 2.22.0 Dec 12 18:27:25.961353 ignition[903]: Stage: disks Dec 12 18:27:25.961562 ignition[903]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:27:25.961579 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:27:25.964381 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 18:27:25.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:25.962704 ignition[903]: disks: disks passed Dec 12 18:27:25.966560 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 18:27:25.962782 ignition[903]: Ignition finished successfully Dec 12 18:27:25.967655 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 18:27:25.968978 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:27:25.970524 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:27:25.971975 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:27:25.976414 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 18:27:26.025920 systemd-fsck[911]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 12 18:27:26.030769 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 18:27:26.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:26.035819 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 18:27:26.190201 kernel: EXT4-fs (vda9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none. Dec 12 18:27:26.190455 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 18:27:26.191769 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 18:27:26.195249 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:27:26.197051 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 18:27:26.199003 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 18:27:26.202350 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 12 18:27:26.203102 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 18:27:26.203170 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:27:26.218255 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 18:27:26.221361 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 12 18:27:26.240932 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (919) Dec 12 18:27:26.257050 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:27:26.257105 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:27:26.275016 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:27:26.275078 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:27:26.284869 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:27:26.316212 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:27:26.323543 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 18:27:26.330171 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Dec 12 18:27:26.340098 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 18:27:26.349200 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 18:27:26.464814 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 18:27:26.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:26.467021 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 18:27:26.469356 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 18:27:26.492500 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:27:26.500308 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 18:27:26.520363 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 18:27:26.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:26.539196 ignition[1037]: INFO : Ignition 2.22.0 Dec 12 18:27:26.539196 ignition[1037]: INFO : Stage: mount Dec 12 18:27:26.543298 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:27:26.543298 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:27:26.543298 ignition[1037]: INFO : mount: mount passed Dec 12 18:27:26.543298 ignition[1037]: INFO : Ignition finished successfully Dec 12 18:27:26.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:26.542465 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 18:27:26.556392 systemd-networkd[724]: eth0: Gained IPv6LL Dec 12 18:27:27.351184 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:27:28.066343 systemd-networkd[724]: eth0: Ignoring DHCPv6 address 2a02:1348:179:85d0:24:19ff:fee6:1742/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:85d0:24:19ff:fee6:1742/64 assigned by NDisc. Dec 12 18:27:28.066355 systemd-networkd[724]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
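The lease recorded above (10.230.23.66/30 with gateway 10.230.23.65, configured from the catch-all zz-default.network match) is a /30, so the subnet holds exactly two usable hosts: the instance and its gateway. A quick check with Python's ipaddress module, for illustration only:

```python
# Worked example: the /30 that eth0 was leased in the log above has exactly two
# usable host addresses, which is why the logged gateway sits right next to it.
import ipaddress

iface = ipaddress.ip_interface("10.230.23.66/30")    # DHCPv4 address from the log
net = iface.network                                  # the containing /30
print(net)                                           # 10.230.23.64/30
print([str(h) for h in net.hosts()])                 # ['10.230.23.65', '10.230.23.66']
print(ipaddress.ip_address("10.230.23.65") in net)   # True (the logged gateway)
```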
Dec 12 18:27:29.361200 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:27:33.373200 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:27:33.379676 coreos-metadata[921]: Dec 12 18:27:33.379 WARN failed to locate config-drive, using the metadata service API instead Dec 12 18:27:33.405286 coreos-metadata[921]: Dec 12 18:27:33.405 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 18:27:33.419485 coreos-metadata[921]: Dec 12 18:27:33.419 INFO Fetch successful Dec 12 18:27:33.420335 coreos-metadata[921]: Dec 12 18:27:33.420 INFO wrote hostname srv-vv1nl.gb1.brightbox.com to /sysroot/etc/hostname Dec 12 18:27:33.422537 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 12 18:27:33.437307 kernel: kauditd_printk_skb: 5 callbacks suppressed Dec 12 18:27:33.437365 kernel: audit: type=1130 audit(1765564053.424:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:33.437391 kernel: audit: type=1131 audit(1765564053.424:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:33.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:33.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:33.422721 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 12 18:27:33.427314 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 18:27:33.452372 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:27:33.493203 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1053) Dec 12 18:27:33.497176 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:27:33.497215 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:27:33.504009 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:27:33.504112 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:27:33.506940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
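With no config-2 drive present (the repeated "Can't lookup blockdev" lines), both Ignition and coreos-metadata fall back to the link-local metadata service at 169.254.169.254: Ignition fetched user data from /openstack/latest/user_data, and coreos-metadata fetched the hostname from /latest/meta-data/hostname and wrote it to /sysroot/etc/hostname. A minimal sketch of those two HTTP fetches (endpoints taken from the log above; the single attempt and five-second timeout are assumptions, not the agents' real retry logic):

```python
# Illustration only: reproduce the two metadata-service fetches seen in the log.
# Endpoints come from the log above; retry/timeout behaviour is an assumption.
import urllib.request
import urllib.error

METADATA_BASE = "http://169.254.169.254"

def fetch(path: str, timeout: float = 5.0) -> bytes:
    """GET one metadata endpoint (single attempt, unlike the real agents)."""
    with urllib.request.urlopen(f"{METADATA_BASE}{path}", timeout=timeout) as resp:
        return resp.read()

if __name__ == "__main__":
    try:
        user_data = fetch("/openstack/latest/user_data")           # Ignition 'fetch' stage
        hostname = fetch("/latest/meta-data/hostname").decode().strip()
        print(f"user data: {len(user_data)} bytes, hostname: {hostname}")
        # coreos-metadata then writes the hostname to /sysroot/etc/hostname.
    except urllib.error.URLError as err:
        print(f"metadata service unreachable: {err}")
```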
Dec 12 18:27:33.555791 ignition[1071]: INFO : Ignition 2.22.0 Dec 12 18:27:33.555791 ignition[1071]: INFO : Stage: files Dec 12 18:27:33.555791 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:27:33.555791 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:27:33.555791 ignition[1071]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:27:33.560377 ignition[1071]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:27:33.560377 ignition[1071]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:27:33.566554 ignition[1071]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:27:33.566554 ignition[1071]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:27:33.568829 ignition[1071]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:27:33.568065 unknown[1071]: wrote ssh authorized keys file for user: core Dec 12 18:27:33.570927 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 18:27:33.570927 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 12 18:27:33.801827 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 18:27:34.082278 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 18:27:34.083676 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 18:27:34.083676 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 18:27:34.083676 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:27:34.083676 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:27:34.083676 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:27:34.083676 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:27:34.083676 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:27:34.083676 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:27:34.092510 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:27:34.092510 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:27:34.092510 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:27:34.092510 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:27:34.092510 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:27:34.092510 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 12 18:27:34.468140 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 18:27:37.990134 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:27:37.998856 ignition[1071]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 18:27:37.998856 ignition[1071]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:27:38.001776 ignition[1071]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:27:38.001776 ignition[1071]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 18:27:38.001776 ignition[1071]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 18:27:38.005076 ignition[1071]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 18:27:38.005076 ignition[1071]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:27:38.005076 ignition[1071]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:27:38.005076 ignition[1071]: INFO : files: files passed Dec 12 18:27:38.005076 ignition[1071]: INFO : Ignition finished successfully Dec 12 18:27:38.019697 kernel: audit: type=1130 audit(1765564058.008:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.006003 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 18:27:38.011433 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 18:27:38.021449 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 18:27:38.030460 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 18:27:38.032798 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 18:27:38.045415 kernel: audit: type=1130 audit(1765564058.033:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.045502 kernel: audit: type=1131 audit(1765564058.038:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:38.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.058007 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:27:38.058007 initrd-setup-root-after-ignition[1102]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:27:38.061278 initrd-setup-root-after-ignition[1106]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:27:38.061785 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:27:38.069579 kernel: audit: type=1130 audit(1765564058.063:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.064026 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 18:27:38.071985 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 18:27:38.139400 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 18:27:38.139596 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 18:27:38.151768 kernel: audit: type=1130 audit(1765564058.140:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.151841 kernel: audit: type=1131 audit(1765564058.141:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.142134 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 18:27:38.158281 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 18:27:38.160054 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 18:27:38.161791 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 18:27:38.194438 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:27:38.201102 kernel: audit: type=1130 audit(1765564058.195:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 18:27:38.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.199361 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 18:27:38.236131 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:27:38.237618 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:27:38.238494 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:27:38.240349 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 18:27:38.241959 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 18:27:38.249025 kernel: audit: type=1131 audit(1765564058.243:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.242290 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:27:38.249229 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 18:27:38.251065 systemd[1]: Stopped target basic.target - Basic System. Dec 12 18:27:38.252450 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 18:27:38.254801 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:27:38.255715 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 18:27:38.257343 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:27:38.259070 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 18:27:38.260601 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:27:38.262290 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 18:27:38.263736 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 18:27:38.265471 systemd[1]: Stopped target swap.target - Swaps. Dec 12 18:27:38.266666 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 18:27:38.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.266894 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:27:38.268643 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:27:38.269631 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:27:38.271076 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 18:27:38.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.271325 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:27:38.272739 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Dec 12 18:27:38.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.273011 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 18:27:38.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.274796 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 18:27:38.274988 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:27:38.276982 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 18:27:38.277247 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 18:27:38.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.280450 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 18:27:38.283517 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 18:27:38.283776 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:27:38.287455 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 18:27:38.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.289270 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 18:27:38.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.290406 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:27:38.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.291996 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 18:27:38.292272 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:27:38.294230 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 18:27:38.294502 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:27:38.309895 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 18:27:38.312231 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 18:27:38.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.325065 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 18:27:38.330764 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Dec 12 18:27:38.331435 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 18:27:38.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.340786 ignition[1126]: INFO : Ignition 2.22.0 Dec 12 18:27:38.340786 ignition[1126]: INFO : Stage: umount Dec 12 18:27:38.342604 ignition[1126]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:27:38.342604 ignition[1126]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 18:27:38.342604 ignition[1126]: INFO : umount: umount passed Dec 12 18:27:38.342604 ignition[1126]: INFO : Ignition finished successfully Dec 12 18:27:38.344842 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 18:27:38.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.346294 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 18:27:38.347956 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 18:27:38.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.348110 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 18:27:38.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.349690 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 18:27:38.349766 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 18:27:38.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.351314 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 18:27:38.351407 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 18:27:38.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.353747 systemd[1]: Stopped target network.target - Network. Dec 12 18:27:38.355041 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 18:27:38.355129 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:27:38.356568 systemd[1]: Stopped target paths.target - Path Units. Dec 12 18:27:38.359645 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 18:27:38.361010 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:27:38.362393 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 18:27:38.363863 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 18:27:38.365578 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 18:27:38.365664 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:27:38.366880 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Dec 12 18:27:38.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.366989 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:27:38.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.368300 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 18:27:38.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.368354 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:27:38.369838 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 18:27:38.369954 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 18:27:38.371273 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 18:27:38.371345 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 18:27:38.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.372577 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 18:27:38.372653 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 18:27:38.374175 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 18:27:38.377642 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 18:27:38.384366 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 18:27:38.384628 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 18:27:38.397121 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 18:27:38.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.397500 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 18:27:38.400000 audit: BPF prog-id=9 op=UNLOAD Dec 12 18:27:38.400876 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 18:27:38.402531 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 18:27:38.403000 audit: BPF prog-id=6 op=UNLOAD Dec 12 18:27:38.402648 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:27:38.405877 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 18:27:38.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.406622 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 18:27:38.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:38.406714 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:27:38.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.407634 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 18:27:38.407709 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:27:38.409037 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 18:27:38.409108 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 18:27:38.410553 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:27:38.426112 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 18:27:38.435090 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 12 18:27:38.435137 kernel: audit: type=1131 audit(1765564058.427:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.426415 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:27:38.428481 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 18:27:38.428554 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 18:27:38.437983 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 18:27:38.452649 kernel: audit: type=1131 audit(1765564058.441:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.452708 kernel: audit: type=1131 audit(1765564058.447:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.438065 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:27:38.438781 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 18:27:38.460230 kernel: audit: type=1131 audit(1765564058.454:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.438874 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Dec 12 18:27:38.446409 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 18:27:38.446498 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 18:27:38.453262 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 18:27:38.471341 kernel: audit: type=1131 audit(1765564058.465:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.453352 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:27:38.461544 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 18:27:38.479049 kernel: audit: type=1131 audit(1765564058.473:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.463507 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 18:27:38.489271 kernel: audit: type=1131 audit(1765564058.479:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.463591 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:27:38.465612 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 18:27:38.465690 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:27:38.473335 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:27:38.505665 kernel: audit: type=1130 audit(1765564058.494:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.505707 kernel: audit: type=1131 audit(1765564058.494:82): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.473440 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 12 18:27:38.493508 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 18:27:38.493764 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 18:27:38.512964 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 18:27:38.513220 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 18:27:38.520241 kernel: audit: type=1131 audit(1765564058.514:83): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:38.515604 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:27:38.522347 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:27:38.544536 systemd[1]: Switching root. Dec 12 18:27:38.591179 systemd-journald[330]: Received SIGTERM from PID 1 (systemd). Dec 12 18:27:38.591299 systemd-journald[330]: Journal stopped Dec 12 18:27:40.292959 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:27:40.296963 kernel: SELinux: policy capability open_perms=1 Dec 12 18:27:40.297034 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:27:40.297067 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:27:40.297109 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:27:40.297138 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:27:40.297433 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:27:40.297478 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:27:40.297521 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:27:40.297561 systemd[1]: Successfully loaded SELinux policy in 92.644ms. Dec 12 18:27:40.297622 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.427ms. Dec 12 18:27:40.297661 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:27:40.297702 systemd[1]: Detected virtualization kvm. Dec 12 18:27:40.297745 systemd[1]: Detected architecture x86-64. Dec 12 18:27:40.297777 systemd[1]: Detected first boot. Dec 12 18:27:40.297810 systemd[1]: Hostname set to . Dec 12 18:27:40.297833 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 18:27:40.297865 zram_generator::config[1169]: No configuration found. Dec 12 18:27:40.297908 kernel: Guest personality initialized and is inactive Dec 12 18:27:40.297944 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 18:27:40.297981 kernel: Initialized host personality Dec 12 18:27:40.298015 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:27:40.298047 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:27:40.301932 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:27:40.302017 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:27:40.302073 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Dec 12 18:27:40.302133 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:27:40.302207 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:27:40.302243 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 18:27:40.302268 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 18:27:40.302311 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 18:27:40.302336 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:27:40.302361 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:27:40.302397 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:27:40.302433 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:27:40.302464 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:27:40.302489 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 18:27:40.302518 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 18:27:40.302543 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:27:40.302575 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:27:40.302616 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:27:40.302640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:27:40.302664 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:27:40.302693 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:27:40.302718 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 18:27:40.302756 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:27:40.302792 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:27:40.302831 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:27:40.302861 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:27:40.302895 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 18:27:40.302928 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:27:40.302958 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:27:40.302982 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 18:27:40.303018 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:27:40.303042 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 18:27:40.303074 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:27:40.303099 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 18:27:40.303134 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:27:40.304634 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 18:27:40.304701 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. 
Dec 12 18:27:40.304743 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:27:40.304774 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:27:40.304798 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:27:40.304828 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:27:40.304851 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:27:40.310669 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:27:40.310753 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:27:40.310809 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 18:27:40.310835 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:27:40.310859 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:27:40.310903 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:27:40.310940 systemd[1]: Reached target machines.target - Containers. Dec 12 18:27:40.310971 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 18:27:40.311007 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:27:40.311033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:27:40.311057 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:27:40.311080 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:27:40.311102 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:27:40.311134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:27:40.312718 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 18:27:40.312781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:27:40.312814 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:27:40.312848 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:27:40.312880 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:27:40.312933 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:27:40.312969 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 18:27:40.313007 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:27:40.313031 kernel: fuse: init (API version 7.41) Dec 12 18:27:40.313056 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:27:40.313079 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:27:40.313108 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:27:40.313148 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Dec 12 18:27:40.313205 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 18:27:40.313231 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:27:40.313265 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:27:40.313289 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 18:27:40.313312 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:27:40.313353 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:27:40.313379 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:27:40.313402 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:27:40.313437 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:27:40.313462 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:27:40.313508 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 18:27:40.313533 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 18:27:40.313565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:27:40.313599 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:27:40.313629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:27:40.313659 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:27:40.313699 kernel: ACPI: bus type drm_connector registered Dec 12 18:27:40.313723 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:27:40.313746 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 18:27:40.313769 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:27:40.313791 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:27:40.313814 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:27:40.313844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:27:40.313880 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:27:40.313995 systemd-journald[1257]: Collecting audit messages is enabled. Dec 12 18:27:40.314069 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:27:40.314109 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 18:27:40.314141 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:27:40.318438 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:27:40.318524 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 18:27:40.318552 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:27:40.318584 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:27:40.318618 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:27:40.318647 systemd-journald[1257]: Journal started Dec 12 18:27:40.318698 systemd-journald[1257]: Runtime Journal (/run/log/journal/ed72cb880d474555b2723233143bc54e) is 4.7M, max 37.8M, 33M free. 
Dec 12 18:27:40.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.105000 audit: BPF prog-id=14 op=UNLOAD Dec 12 18:27:40.107000 audit: BPF prog-id=13 op=UNLOAD Dec 12 18:27:40.108000 audit: BPF prog-id=15 op=LOAD Dec 12 18:27:40.109000 audit: BPF prog-id=16 op=LOAD Dec 12 18:27:40.109000 audit: BPF prog-id=17 op=LOAD Dec 12 18:27:40.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:40.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.284000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 18:27:40.284000 audit[1257]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff022c98b0 a2=4000 a3=0 items=0 ppid=1 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:27:40.284000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 18:27:40.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:39.767774 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:27:40.327876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:27:40.327947 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:27:39.793515 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 18:27:39.794587 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 18:27:40.335278 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 18:27:40.341210 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:27:40.348200 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 18:27:40.348351 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:27:40.359186 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:27:40.366193 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:27:40.373191 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 12 18:27:40.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.376240 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:27:40.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.377555 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:27:40.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.403590 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:27:40.408494 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:27:40.414568 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 18:27:40.420529 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:27:40.436189 kernel: loop1: detected capacity change from 0 to 8 Dec 12 18:27:40.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.466613 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:27:40.484353 systemd-journald[1257]: Time spent on flushing to /var/log/journal/ed72cb880d474555b2723233143bc54e is 112.584ms for 1308 entries. Dec 12 18:27:40.484353 systemd-journald[1257]: System Journal (/var/log/journal/ed72cb880d474555b2723233143bc54e) is 8M, max 588.1M, 580.1M free. Dec 12 18:27:40.615978 systemd-journald[1257]: Received client request to flush runtime journal. Dec 12 18:27:40.616046 kernel: loop2: detected capacity change from 0 to 224512 Dec 12 18:27:40.616076 kernel: loop3: detected capacity change from 0 to 111544 Dec 12 18:27:40.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.529000 audit: BPF prog-id=18 op=LOAD Dec 12 18:27:40.529000 audit: BPF prog-id=19 op=LOAD Dec 12 18:27:40.529000 audit: BPF prog-id=20 op=LOAD Dec 12 18:27:40.534000 audit: BPF prog-id=21 op=LOAD Dec 12 18:27:40.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:40.588000 audit: BPF prog-id=22 op=LOAD Dec 12 18:27:40.588000 audit: BPF prog-id=23 op=LOAD Dec 12 18:27:40.591000 audit: BPF prog-id=24 op=LOAD Dec 12 18:27:40.598000 audit: BPF prog-id=25 op=LOAD Dec 12 18:27:40.599000 audit: BPF prog-id=26 op=LOAD Dec 12 18:27:40.599000 audit: BPF prog-id=27 op=LOAD Dec 12 18:27:40.511345 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:27:40.525394 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 18:27:40.531266 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 18:27:40.536480 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:27:40.544449 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:27:40.573768 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:27:40.593558 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 18:27:40.601586 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 18:27:40.621300 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:27:40.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.642287 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Dec 12 18:27:40.643068 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Dec 12 18:27:40.652186 kernel: loop4: detected capacity change from 0 to 119256 Dec 12 18:27:40.668979 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:27:40.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.700196 kernel: loop5: detected capacity change from 0 to 8 Dec 12 18:27:40.707195 kernel: loop6: detected capacity change from 0 to 224512 Dec 12 18:27:40.721428 systemd-nsresourced[1324]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 18:27:40.724805 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:27:40.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.726718 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 18:27:40.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.733186 kernel: loop7: detected capacity change from 0 to 111544 Dec 12 18:27:40.751182 kernel: loop1: detected capacity change from 0 to 119256 Dec 12 18:27:40.763848 (sd-merge)[1330]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Dec 12 18:27:40.773007 (sd-merge)[1330]: Merged extensions into '/usr'. 
Dec 12 18:27:40.787709 systemd[1]: Reload requested from client PID 1287 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 18:27:40.787748 systemd[1]: Reloading... Dec 12 18:27:40.883194 zram_generator::config[1369]: No configuration found. Dec 12 18:27:40.956717 systemd-oomd[1317]: No swap; memory pressure usage will be degraded Dec 12 18:27:40.962764 systemd-resolved[1318]: Positive Trust Anchors: Dec 12 18:27:40.962793 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:27:40.962801 systemd-resolved[1318]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 18:27:40.962846 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:27:40.988332 systemd-resolved[1318]: Using system hostname 'srv-vv1nl.gb1.brightbox.com'. Dec 12 18:27:41.286225 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 18:27:41.287215 systemd[1]: Reloading finished in 498 ms. Dec 12 18:27:41.318841 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 18:27:41.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:41.320026 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:27:41.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:41.321355 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 18:27:41.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:41.327510 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:27:41.330492 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 18:27:41.337891 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 18:27:41.348463 systemd[1]: Starting ensure-sysext.service... Dec 12 18:27:41.356613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 12 18:27:41.374000 audit: BPF prog-id=28 op=LOAD Dec 12 18:27:41.374000 audit: BPF prog-id=15 op=UNLOAD Dec 12 18:27:41.374000 audit: BPF prog-id=29 op=LOAD Dec 12 18:27:41.374000 audit: BPF prog-id=30 op=LOAD Dec 12 18:27:41.374000 audit: BPF prog-id=16 op=UNLOAD Dec 12 18:27:41.374000 audit: BPF prog-id=17 op=UNLOAD Dec 12 18:27:41.376000 audit: BPF prog-id=31 op=LOAD Dec 12 18:27:41.376000 audit: BPF prog-id=21 op=UNLOAD Dec 12 18:27:41.379000 audit: BPF prog-id=32 op=LOAD Dec 12 18:27:41.382000 audit: BPF prog-id=25 op=UNLOAD Dec 12 18:27:41.382000 audit: BPF prog-id=33 op=LOAD Dec 12 18:27:41.382000 audit: BPF prog-id=34 op=LOAD Dec 12 18:27:41.382000 audit: BPF prog-id=26 op=UNLOAD Dec 12 18:27:41.382000 audit: BPF prog-id=27 op=UNLOAD Dec 12 18:27:41.384000 audit: BPF prog-id=35 op=LOAD Dec 12 18:27:41.384000 audit: BPF prog-id=18 op=UNLOAD Dec 12 18:27:41.384000 audit: BPF prog-id=36 op=LOAD Dec 12 18:27:41.386000 audit: BPF prog-id=37 op=LOAD Dec 12 18:27:41.386000 audit: BPF prog-id=19 op=UNLOAD Dec 12 18:27:41.386000 audit: BPF prog-id=20 op=UNLOAD Dec 12 18:27:41.389000 audit: BPF prog-id=38 op=LOAD Dec 12 18:27:41.390000 audit: BPF prog-id=22 op=UNLOAD Dec 12 18:27:41.390000 audit: BPF prog-id=39 op=LOAD Dec 12 18:27:41.391000 audit: BPF prog-id=40 op=LOAD Dec 12 18:27:41.391000 audit: BPF prog-id=23 op=UNLOAD Dec 12 18:27:41.391000 audit: BPF prog-id=24 op=UNLOAD Dec 12 18:27:41.393623 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 18:27:41.393682 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 18:27:41.394233 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 18:27:41.396411 systemd-tmpfiles[1431]: ACLs are not supported, ignoring. Dec 12 18:27:41.396432 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 18:27:41.396513 systemd-tmpfiles[1431]: ACLs are not supported, ignoring. Dec 12 18:27:41.398305 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 18:27:41.405780 systemd-tmpfiles[1431]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:27:41.405802 systemd-tmpfiles[1431]: Skipping /boot Dec 12 18:27:41.411494 systemd[1]: Reload requested from client PID 1430 ('systemctl') (unit ensure-sysext.service)... Dec 12 18:27:41.411529 systemd[1]: Reloading... Dec 12 18:27:41.423907 systemd-tmpfiles[1431]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:27:41.423930 systemd-tmpfiles[1431]: Skipping /boot Dec 12 18:27:41.521206 zram_generator::config[1465]: No configuration found. Dec 12 18:27:41.836584 systemd[1]: Reloading finished in 424 ms. Dec 12 18:27:41.852416 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 18:27:41.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:41.855000 audit: BPF prog-id=41 op=LOAD Dec 12 18:27:41.855000 audit: BPF prog-id=35 op=UNLOAD Dec 12 18:27:41.855000 audit: BPF prog-id=42 op=LOAD Dec 12 18:27:41.855000 audit: BPF prog-id=43 op=LOAD Dec 12 18:27:41.855000 audit: BPF prog-id=36 op=UNLOAD Dec 12 18:27:41.855000 audit: BPF prog-id=37 op=UNLOAD Dec 12 18:27:41.857000 audit: BPF prog-id=44 op=LOAD Dec 12 18:27:41.857000 audit: BPF prog-id=32 op=UNLOAD Dec 12 18:27:41.857000 audit: BPF prog-id=45 op=LOAD Dec 12 18:27:41.857000 audit: BPF prog-id=46 op=LOAD Dec 12 18:27:41.857000 audit: BPF prog-id=33 op=UNLOAD Dec 12 18:27:41.857000 audit: BPF prog-id=34 op=UNLOAD Dec 12 18:27:41.858000 audit: BPF prog-id=47 op=LOAD Dec 12 18:27:41.858000 audit: BPF prog-id=38 op=UNLOAD Dec 12 18:27:41.858000 audit: BPF prog-id=48 op=LOAD Dec 12 18:27:41.858000 audit: BPF prog-id=49 op=LOAD Dec 12 18:27:41.858000 audit: BPF prog-id=39 op=UNLOAD Dec 12 18:27:41.858000 audit: BPF prog-id=40 op=UNLOAD Dec 12 18:27:41.860000 audit: BPF prog-id=50 op=LOAD Dec 12 18:27:41.860000 audit: BPF prog-id=28 op=UNLOAD Dec 12 18:27:41.860000 audit: BPF prog-id=51 op=LOAD Dec 12 18:27:41.860000 audit: BPF prog-id=52 op=LOAD Dec 12 18:27:41.860000 audit: BPF prog-id=29 op=UNLOAD Dec 12 18:27:41.860000 audit: BPF prog-id=30 op=UNLOAD Dec 12 18:27:41.862000 audit: BPF prog-id=53 op=LOAD Dec 12 18:27:41.862000 audit: BPF prog-id=31 op=UNLOAD Dec 12 18:27:41.872778 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:27:41.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:41.886921 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:27:41.891538 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 18:27:41.906930 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 18:27:41.912637 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 18:27:41.913000 audit: BPF prog-id=8 op=UNLOAD Dec 12 18:27:41.913000 audit: BPF prog-id=7 op=UNLOAD Dec 12 18:27:41.914000 audit: BPF prog-id=54 op=LOAD Dec 12 18:27:41.914000 audit: BPF prog-id=55 op=LOAD Dec 12 18:27:41.918037 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:27:41.925415 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 18:27:41.933738 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:27:41.934028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:27:41.940195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:27:41.954960 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:27:41.962714 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:27:41.964469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:27:41.964794 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 12 18:27:41.964969 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:27:41.965118 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:27:41.971826 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:27:41.972152 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:27:41.973490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:27:41.973753 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:27:41.973923 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:27:41.974064 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:27:41.988000 audit[1528]: SYSTEM_BOOT pid=1528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 18:27:41.980791 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:27:41.981216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:27:41.983975 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:27:41.985423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:27:41.985671 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:27:41.985817 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:27:41.986013 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:27:41.995743 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 18:27:41.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:42.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:42.009455 systemd[1]: Finished ensure-sysext.service. 
Dec 12 18:27:42.013000 audit: BPF prog-id=56 op=LOAD Dec 12 18:27:42.015913 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 18:27:42.047250 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:27:42.051292 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:27:42.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:42.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:42.054995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:27:42.056024 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:27:42.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:42.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:42.082296 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:27:42.083262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:27:42.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:42.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:42.086028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:27:42.089497 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:27:42.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:42.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:42.093137 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:27:42.094532 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:27:42.110813 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 18:27:42.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:42.125000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 18:27:42.125000 audit[1562]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff78bbea90 a2=420 a3=0 items=0 ppid=1523 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:27:42.125000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:27:42.125974 augenrules[1562]: No rules Dec 12 18:27:42.130268 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:27:42.130877 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:27:42.150935 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 18:27:42.152756 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:27:42.160899 systemd-udevd[1527]: Using default interface naming scheme 'v257'. Dec 12 18:27:42.180336 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 18:27:42.181371 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 18:27:42.210064 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:27:42.215487 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:27:42.375085 systemd-networkd[1573]: lo: Link UP Dec 12 18:27:42.376638 systemd-networkd[1573]: lo: Gained carrier Dec 12 18:27:42.384595 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:27:42.391688 systemd[1]: Reached target network.target - Network. Dec 12 18:27:42.396523 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 18:27:42.409191 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 18:27:42.458192 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 18:27:42.518418 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 18:27:42.680966 systemd-networkd[1573]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:27:42.681362 systemd-networkd[1573]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:27:42.685404 systemd-networkd[1573]: eth0: Link UP Dec 12 18:27:42.686051 systemd-networkd[1573]: eth0: Gained carrier Dec 12 18:27:42.686436 systemd-networkd[1573]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:27:42.710674 systemd-networkd[1573]: eth0: DHCPv4 address 10.230.23.66/30, gateway 10.230.23.65 acquired from 10.230.23.65 Dec 12 18:27:42.714199 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 12 18:27:42.714323 systemd-timesyncd[1543]: Network configuration changed, trying to establish connection. 
Dec 12 18:27:42.720191 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:27:42.728204 kernel: ACPI: button: Power Button [PWRF] Dec 12 18:27:42.760017 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:27:42.772218 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 18:27:42.823265 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 18:27:42.827080 ldconfig[1525]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 18:27:42.835943 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 18:27:42.840428 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 18:27:42.862190 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 12 18:27:42.869226 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 12 18:27:42.871454 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 18:27:42.882891 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:27:42.885103 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 18:27:42.886379 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 18:27:42.887795 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:27:42.889028 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 18:27:42.890591 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:27:42.892097 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 18:27:42.893408 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 18:27:42.894503 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:27:42.895695 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 18:27:42.895764 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:27:42.896776 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:27:42.899543 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:27:42.902928 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 18:27:42.911681 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:27:42.916543 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 18:27:42.917389 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:27:42.927108 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 18:27:42.929819 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:27:42.932658 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:27:42.937549 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:27:42.939650 systemd[1]: Reached target basic.target - Basic System. 
Dec 12 18:27:42.942292 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:27:42.942343 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:27:42.945399 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:27:42.951513 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 18:27:42.956569 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 18:27:42.963470 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 18:27:42.970046 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:27:42.974722 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:27:42.976289 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:27:42.984910 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:27:42.992426 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:27:43.001286 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 18:27:43.008468 google_oslogin_nss_cache[1626]: oslogin_cache_refresh[1626]: Refreshing passwd entry cache Dec 12 18:27:43.008874 oslogin_cache_refresh[1626]: Refreshing passwd entry cache Dec 12 18:27:43.011501 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 18:27:43.022195 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:27:43.026757 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:27:43.033192 google_oslogin_nss_cache[1626]: oslogin_cache_refresh[1626]: Failure getting users, quitting Dec 12 18:27:43.033192 google_oslogin_nss_cache[1626]: oslogin_cache_refresh[1626]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:27:43.033192 google_oslogin_nss_cache[1626]: oslogin_cache_refresh[1626]: Refreshing group entry cache Dec 12 18:27:43.031430 oslogin_cache_refresh[1626]: Failure getting users, quitting Dec 12 18:27:43.031473 oslogin_cache_refresh[1626]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:27:43.031566 oslogin_cache_refresh[1626]: Refreshing group entry cache Dec 12 18:27:43.035189 google_oslogin_nss_cache[1626]: oslogin_cache_refresh[1626]: Failure getting groups, quitting Dec 12 18:27:43.035189 google_oslogin_nss_cache[1626]: oslogin_cache_refresh[1626]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:27:43.033875 oslogin_cache_refresh[1626]: Failure getting groups, quitting Dec 12 18:27:43.033892 oslogin_cache_refresh[1626]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:27:43.037196 jq[1624]: false Dec 12 18:27:43.036576 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 18:27:43.037339 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 18:27:43.038048 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:27:43.042388 systemd[1]: Starting update-engine.service - Update Engine... 
Dec 12 18:27:43.047979 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 18:27:43.063233 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:27:43.070860 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 18:27:43.071330 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 18:27:43.071802 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 18:27:43.073229 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:27:43.088859 extend-filesystems[1625]: Found /dev/vda6 Dec 12 18:27:43.089375 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:27:43.093974 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 18:27:43.116265 extend-filesystems[1625]: Found /dev/vda9 Dec 12 18:27:43.122872 jq[1636]: true Dec 12 18:27:43.131207 extend-filesystems[1625]: Checking size of /dev/vda9 Dec 12 18:27:43.133283 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 18:27:43.133740 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:27:43.164801 tar[1645]: linux-amd64/LICENSE Dec 12 18:27:43.164801 tar[1645]: linux-amd64/helm Dec 12 18:27:43.207343 extend-filesystems[1625]: Resized partition /dev/vda9 Dec 12 18:27:43.265693 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Dec 12 18:27:43.265876 extend-filesystems[1674]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:27:43.277274 jq[1660]: true Dec 12 18:27:43.281230 update_engine[1635]: I20251212 18:27:43.270193 1635 main.cc:92] Flatcar Update Engine starting Dec 12 18:27:43.313490 dbus-daemon[1622]: [system] SELinux support is enabled Dec 12 18:27:43.315177 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 18:27:43.324060 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:27:43.324128 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:27:43.325647 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 18:27:43.327913 dbus-daemon[1622]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1573 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 12 18:27:43.325684 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 18:27:43.333108 update_engine[1635]: I20251212 18:27:43.333019 1635 update_check_scheduler.cc:74] Next update check in 8m29s Dec 12 18:27:43.346149 systemd[1]: Started update-engine.service - Update Engine. Dec 12 18:27:43.350227 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:27:43.357619 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 12 18:27:43.364551 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:27:43.420721 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
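The extend-filesystems unit above grows /dev/vda9 online: the kernel reports the ext4 filesystem going from 1617920 to 14138363 blocks, and the resize2fs output further down notes these are 4k blocks. A quick arithmetic check of what that means in bytes, assuming the 4 KiB block size it reports:

```python
# Quick check of the ext4 resize reported above: block counts times the 4 KiB
# block size that resize2fs reports ("14138363 (4k) blocks").
BLOCK_SIZE = 4096          # bytes, as reported by resize2fs
OLD_BLOCKS = 1_617_920     # size before the online resize
NEW_BLOCKS = 14_138_363    # size after growing into the partition

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~6.17 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~53.93 GiB
```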
Dec 12 18:27:43.514237 bash[1694]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:27:43.507298 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:27:43.516413 systemd[1]: Starting sshkeys.service... Dec 12 18:27:43.590819 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 18:27:43.598891 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 18:27:43.621194 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Dec 12 18:27:43.656573 extend-filesystems[1674]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 18:27:43.656573 extend-filesystems[1674]: old_desc_blocks = 1, new_desc_blocks = 7 Dec 12 18:27:43.656573 extend-filesystems[1674]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Dec 12 18:27:43.668966 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:27:43.669035 extend-filesystems[1625]: Resized filesystem in /dev/vda9 Dec 12 18:27:43.660954 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:27:43.661403 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:27:43.928581 containerd[1664]: time="2025-12-12T18:27:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:27:43.941739 containerd[1664]: time="2025-12-12T18:27:43.936760906Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 18:27:43.943794 systemd-logind[1633]: Watching system buttons on /dev/input/event3 (Power Button) Dec 12 18:27:43.943856 systemd-logind[1633]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:27:43.965461 containerd[1664]: time="2025-12-12T18:27:43.965350796Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="20.513µs" Dec 12 18:27:43.965461 containerd[1664]: time="2025-12-12T18:27:43.965427348Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:27:43.965678 containerd[1664]: time="2025-12-12T18:27:43.965511708Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:27:43.965678 containerd[1664]: time="2025-12-12T18:27:43.965537544Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:27:43.965977 containerd[1664]: time="2025-12-12T18:27:43.965929377Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:27:43.966030 containerd[1664]: time="2025-12-12T18:27:43.965989203Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:27:43.966146 containerd[1664]: time="2025-12-12T18:27:43.966111483Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:27:43.966146 containerd[1664]: time="2025-12-12T18:27:43.966140607Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:27:43.980280 locksmithd[1683]: locksmithd starting 
currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:27:43.990231 containerd[1664]: time="2025-12-12T18:27:43.989774665Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:27:43.990231 containerd[1664]: time="2025-12-12T18:27:43.989854852Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:27:43.990231 containerd[1664]: time="2025-12-12T18:27:43.989886165Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:27:43.990231 containerd[1664]: time="2025-12-12T18:27:43.989905671Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 18:27:43.991240 containerd[1664]: time="2025-12-12T18:27:43.991023235Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 18:27:43.991240 containerd[1664]: time="2025-12-12T18:27:43.991062638Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:27:43.994175 containerd[1664]: time="2025-12-12T18:27:43.991432627Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:27:43.994175 containerd[1664]: time="2025-12-12T18:27:43.993673373Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:27:43.994175 containerd[1664]: time="2025-12-12T18:27:43.993791477Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:27:43.994175 containerd[1664]: time="2025-12-12T18:27:43.993827637Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:27:43.994175 containerd[1664]: time="2025-12-12T18:27:43.993915433Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:27:43.996726 containerd[1664]: time="2025-12-12T18:27:43.996355777Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:27:43.996726 containerd[1664]: time="2025-12-12T18:27:43.996686235Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.008637034Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.008781633Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.008939871Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.008964278Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking 
type=io.containerd.differ.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.009002691Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.009028511Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.009055713Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.009073451Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.009093460Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.009122010Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.009145431Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.009183353Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.009203663Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:27:44.010485 containerd[1664]: time="2025-12-12T18:27:44.009224161Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:27:44.011053 containerd[1664]: time="2025-12-12T18:27:44.009454295Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:27:44.011053 containerd[1664]: time="2025-12-12T18:27:44.009501973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:27:44.011053 containerd[1664]: time="2025-12-12T18:27:44.009528153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:27:44.011053 containerd[1664]: time="2025-12-12T18:27:44.009547328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:27:44.011053 containerd[1664]: time="2025-12-12T18:27:44.009579776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:27:44.011053 containerd[1664]: time="2025-12-12T18:27:44.009613063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:27:44.011053 containerd[1664]: time="2025-12-12T18:27:44.009640881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:27:44.011053 containerd[1664]: time="2025-12-12T18:27:44.009667180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:27:44.011053 containerd[1664]: time="2025-12-12T18:27:44.009712368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:27:44.011053 containerd[1664]: time="2025-12-12T18:27:44.009736158Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:27:44.011053 containerd[1664]: time="2025-12-12T18:27:44.009754973Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:27:44.011483 containerd[1664]: time="2025-12-12T18:27:44.011229975Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:27:44.011483 containerd[1664]: time="2025-12-12T18:27:44.011337132Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:27:44.011483 containerd[1664]: time="2025-12-12T18:27:44.011379888Z" level=info msg="Start snapshots syncer" Dec 12 18:27:44.011483 containerd[1664]: time="2025-12-12T18:27:44.011441720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:27:44.014696 containerd[1664]: time="2025-12-12T18:27:44.011905795Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:27:44.014696 containerd[1664]: time="2025-12-12T18:27:44.012005601Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.012153882Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.014089642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.014126664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 
18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.014149454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.014219282Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.014256287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.014279572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.014298473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.014317155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.014346535Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.014412306Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.014442356Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:27:44.015035 containerd[1664]: time="2025-12-12T18:27:44.014466796Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:27:44.017943 containerd[1664]: time="2025-12-12T18:27:44.014487297Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:27:44.017943 containerd[1664]: time="2025-12-12T18:27:44.014502683Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:27:44.017943 containerd[1664]: time="2025-12-12T18:27:44.014519618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:27:44.017943 containerd[1664]: time="2025-12-12T18:27:44.014543570Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:27:44.017943 containerd[1664]: time="2025-12-12T18:27:44.014573699Z" level=info msg="runtime interface created" Dec 12 18:27:44.017943 containerd[1664]: time="2025-12-12T18:27:44.014590944Z" level=info msg="created NRI interface" Dec 12 18:27:44.017943 containerd[1664]: time="2025-12-12T18:27:44.014605806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:27:44.017943 containerd[1664]: time="2025-12-12T18:27:44.014629730Z" level=info msg="Connect containerd service" Dec 12 18:27:44.017943 containerd[1664]: time="2025-12-12T18:27:44.014667723Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:27:44.017943 containerd[1664]: time="2025-12-12T18:27:44.016031713Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found 
in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:27:44.029061 systemd-logind[1633]: New seat seat0. Dec 12 18:27:44.175383 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:27:44.185358 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:27:44.345450 containerd[1664]: time="2025-12-12T18:27:44.345227919Z" level=info msg="Start subscribing containerd event" Dec 12 18:27:44.345450 containerd[1664]: time="2025-12-12T18:27:44.345326137Z" level=info msg="Start recovering state" Dec 12 18:27:44.346097 containerd[1664]: time="2025-12-12T18:27:44.346069154Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:27:44.348450 containerd[1664]: time="2025-12-12T18:27:44.346933687Z" level=info msg="Start event monitor" Dec 12 18:27:44.348450 containerd[1664]: time="2025-12-12T18:27:44.347213679Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:27:44.348450 containerd[1664]: time="2025-12-12T18:27:44.347233914Z" level=info msg="Start streaming server" Dec 12 18:27:44.348450 containerd[1664]: time="2025-12-12T18:27:44.347248975Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:27:44.348450 containerd[1664]: time="2025-12-12T18:27:44.347261969Z" level=info msg="runtime interface starting up..." Dec 12 18:27:44.348450 containerd[1664]: time="2025-12-12T18:27:44.347272462Z" level=info msg="starting plugins..." Dec 12 18:27:44.348450 containerd[1664]: time="2025-12-12T18:27:44.347298924Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:27:44.348855 containerd[1664]: time="2025-12-12T18:27:44.348825660Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 18:27:44.349050 containerd[1664]: time="2025-12-12T18:27:44.349026143Z" level=info msg="containerd successfully booted in 0.426891s" Dec 12 18:27:44.349377 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:27:44.431940 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 12 18:27:44.443685 dbus-daemon[1622]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 18:27:44.453090 dbus-daemon[1622]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.9' (uid=0 pid=1682 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 18:27:44.465041 systemd[1]: Starting polkit.service - Authorization Manager... Dec 12 18:27:44.607524 polkitd[1734]: Started polkitd version 126 Dec 12 18:27:44.618717 polkitd[1734]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 18:27:44.626215 polkitd[1734]: Loading rules from directory /run/polkit-1/rules.d Dec 12 18:27:44.626315 polkitd[1734]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:27:44.626672 polkitd[1734]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 18:27:44.626716 polkitd[1734]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:27:44.626784 polkitd[1734]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 18:27:44.630233 polkitd[1734]: Finished loading, compiling and executing 2 rules Dec 12 18:27:44.631054 systemd[1]: Started polkit.service - Authorization Manager. 
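containerd comes up with "cni config load failed: no network config found in /etc/cni/net.d", which is expected this early: its CRI config (dumped above) points at /etc/cni/net.d and /opt/cni/bin, and nothing has populated them yet. Purely as an illustration of the file shape that check looks for, here is a hypothetical minimal bridge conflist written from Python; it assumes the standard `bridge` and `host-local` reference plugins and is not what this host actually installs later:

```python
# Illustrative only: the shape of a minimal CNI network config that would satisfy
# the "no network config found in /etc/cni/net.d" check above. Assumes the standard
# `bridge` and `host-local` reference plugins; on this host the file is normally
# written later by the cluster networking add-on, not by hand.
import json, pathlib

conf = {
    "cniVersion": "1.0.0",
    "name": "example-bridge",            # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",  # hypothetical pod subnet
            },
        }
    ],
}

path = pathlib.Path("/tmp/10-example-bridge.conflist")  # deliberately not /etc/cni/net.d
path.write_text(json.dumps(conf, indent=2))
print(f"wrote {path}")
```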
Dec 12 18:27:44.634186 dbus-daemon[1622]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 18:27:44.635024 polkitd[1734]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 18:27:44.660311 systemd-hostnamed[1682]: Hostname set to (static) Dec 12 18:27:44.732498 systemd-networkd[1573]: eth0: Gained IPv6LL Dec 12 18:27:44.734390 systemd-timesyncd[1543]: Network configuration changed, trying to establish connection. Dec 12 18:27:44.739508 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:27:44.741951 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:27:44.748289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:27:44.752370 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:27:44.827592 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:27:44.860522 tar[1645]: linux-amd64/README.md Dec 12 18:27:44.880008 sshd_keygen[1663]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:27:44.896539 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:27:44.919350 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:27:44.925602 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:27:44.929607 systemd[1]: Started sshd@0-10.230.23.66:22-139.178.89.65:54878.service - OpenSSH per-connection server daemon (139.178.89.65:54878). Dec 12 18:27:44.962221 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:27:44.963920 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:27:44.975846 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:27:45.010707 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:27:45.026927 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:27:45.032363 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:27:45.033591 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:27:45.407326 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:27:45.414752 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:27:45.755822 sshd[1768]: Accepted publickey for core from 139.178.89.65 port 54878 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:27:45.759575 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:27:45.776939 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:27:45.787595 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:27:45.801705 systemd-logind[1633]: New session 1 of user core. Dec 12 18:27:45.842281 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:27:45.860120 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:27:45.877960 (systemd)[1784]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:27:45.880126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:27:45.888013 systemd-logind[1633]: New session c1 of user core. 
Dec 12 18:27:45.893639 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:27:45.987868 systemd-timesyncd[1543]: Network configuration changed, trying to establish connection. Dec 12 18:27:45.990285 systemd-networkd[1573]: eth0: Ignoring DHCPv6 address 2a02:1348:179:85d0:24:19ff:fee6:1742/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:85d0:24:19ff:fee6:1742/64 assigned by NDisc. Dec 12 18:27:45.990297 systemd-networkd[1573]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 12 18:27:46.085095 systemd[1784]: Queued start job for default target default.target. Dec 12 18:27:46.092659 systemd[1784]: Created slice app.slice - User Application Slice. Dec 12 18:27:46.092722 systemd[1784]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 18:27:46.092765 systemd[1784]: Reached target paths.target - Paths. Dec 12 18:27:46.093038 systemd[1784]: Reached target timers.target - Timers. Dec 12 18:27:46.097301 systemd[1784]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:27:46.099321 systemd[1784]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 18:27:46.121215 systemd[1784]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 18:27:46.128386 systemd[1784]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:27:46.128747 systemd[1784]: Reached target sockets.target - Sockets. Dec 12 18:27:46.128987 systemd[1784]: Reached target basic.target - Basic System. Dec 12 18:27:46.129231 systemd[1784]: Reached target default.target - Main User Target. Dec 12 18:27:46.129507 systemd[1784]: Startup finished in 229ms. Dec 12 18:27:46.129643 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:27:46.142527 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:27:46.529815 kubelet[1788]: E1212 18:27:46.529725 1788 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:27:46.533342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:27:46.533606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:27:46.534596 systemd[1]: kubelet.service: Consumed 1.088s CPU time, 263.1M memory peak. Dec 12 18:27:46.610886 systemd[1]: Started sshd@1-10.230.23.66:22-139.178.89.65:54880.service - OpenSSH per-connection server daemon (139.178.89.65:54880). Dec 12 18:27:47.229194 systemd-timesyncd[1543]: Network configuration changed, trying to establish connection. Dec 12 18:27:47.406702 sshd[1808]: Accepted publickey for core from 139.178.89.65 port 54880 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:27:47.408698 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:27:47.419402 systemd-logind[1633]: New session 2 of user core. Dec 12 18:27:47.421179 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:27:47.425452 systemd[1]: Started session-2.scope - Session 2 of User core. 
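The kubelet exit above (status=1/FAILURE) is the usual pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist yet (it is typically written by `kubeadm init`/`kubeadm join`), so the unit fails and systemd keeps rescheduling it, as the later restart counters show. A minimal sketch of that pre-flight check:

```python
# Minimal sketch of the check the kubelet is failing above: it exits non-zero until
# /var/lib/kubelet/config.yaml exists (typically written by `kubeadm init`/`join`),
# and systemd keeps rescheduling the unit in the meantime.
import pathlib
import sys

KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

def main() -> int:
    if not KUBELET_CONFIG.exists():
        print(f"kubelet config not found: {KUBELET_CONFIG}", file=sys.stderr)
        return 1   # mirrors the status=1/FAILURE seen in the journal
    print("config present; kubelet would proceed to load it")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```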
Dec 12 18:27:47.440201 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:27:47.851691 sshd[1812]: Connection closed by 139.178.89.65 port 54880 Dec 12 18:27:47.852877 sshd-session[1808]: pam_unix(sshd:session): session closed for user core Dec 12 18:27:47.859788 systemd[1]: sshd@1-10.230.23.66:22-139.178.89.65:54880.service: Deactivated successfully. Dec 12 18:27:47.862316 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:27:47.863982 systemd-logind[1633]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:27:47.866070 systemd-logind[1633]: Removed session 2. Dec 12 18:27:48.018180 systemd[1]: Started sshd@2-10.230.23.66:22-139.178.89.65:54894.service - OpenSSH per-connection server daemon (139.178.89.65:54894). Dec 12 18:27:48.802661 sshd[1819]: Accepted publickey for core from 139.178.89.65 port 54894 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:27:48.804572 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:27:48.812734 systemd-logind[1633]: New session 3 of user core. Dec 12 18:27:48.835108 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:27:49.245235 sshd[1822]: Connection closed by 139.178.89.65 port 54894 Dec 12 18:27:49.246228 sshd-session[1819]: pam_unix(sshd:session): session closed for user core Dec 12 18:27:49.252527 systemd[1]: sshd@2-10.230.23.66:22-139.178.89.65:54894.service: Deactivated successfully. Dec 12 18:27:49.255396 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:27:49.257248 systemd-logind[1633]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:27:49.259837 systemd-logind[1633]: Removed session 3. Dec 12 18:27:50.284423 login[1776]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 12 18:27:50.293043 systemd-logind[1633]: New session 4 of user core. Dec 12 18:27:50.302578 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:27:50.461462 login[1777]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 12 18:27:50.471908 systemd-logind[1633]: New session 5 of user core. Dec 12 18:27:50.478630 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 12 18:27:51.444216 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:27:51.454703 coreos-metadata[1621]: Dec 12 18:27:51.454 WARN failed to locate config-drive, using the metadata service API instead Dec 12 18:27:51.459210 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 18:27:51.471648 coreos-metadata[1705]: Dec 12 18:27:51.471 WARN failed to locate config-drive, using the metadata service API instead Dec 12 18:27:51.484753 coreos-metadata[1621]: Dec 12 18:27:51.484 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 12 18:27:51.494143 coreos-metadata[1705]: Dec 12 18:27:51.494 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 12 18:27:51.495115 coreos-metadata[1621]: Dec 12 18:27:51.495 INFO Fetch failed with 404: resource not found Dec 12 18:27:51.495115 coreos-metadata[1621]: Dec 12 18:27:51.495 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 18:27:51.495627 coreos-metadata[1621]: Dec 12 18:27:51.495 INFO Fetch successful Dec 12 18:27:51.495738 coreos-metadata[1621]: Dec 12 18:27:51.495 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 12 18:27:51.512354 coreos-metadata[1621]: Dec 12 18:27:51.512 INFO Fetch successful Dec 12 18:27:51.512354 coreos-metadata[1621]: Dec 12 18:27:51.512 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 12 18:27:51.513548 coreos-metadata[1705]: Dec 12 18:27:51.513 INFO Fetch successful Dec 12 18:27:51.513769 coreos-metadata[1705]: Dec 12 18:27:51.513 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 12 18:27:51.526094 coreos-metadata[1621]: Dec 12 18:27:51.525 INFO Fetch successful Dec 12 18:27:51.526094 coreos-metadata[1621]: Dec 12 18:27:51.526 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 12 18:27:51.537451 coreos-metadata[1621]: Dec 12 18:27:51.537 INFO Fetch successful Dec 12 18:27:51.537451 coreos-metadata[1621]: Dec 12 18:27:51.537 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 12 18:27:51.543482 coreos-metadata[1705]: Dec 12 18:27:51.543 INFO Fetch successful Dec 12 18:27:51.547912 unknown[1705]: wrote ssh authorized keys file for user: core Dec 12 18:27:51.555776 coreos-metadata[1621]: Dec 12 18:27:51.555 INFO Fetch successful Dec 12 18:27:51.577355 update-ssh-keys[1857]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:27:51.578398 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 18:27:51.582079 systemd[1]: Finished sshkeys.service. Dec 12 18:27:51.589742 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 18:27:51.591586 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:27:51.591951 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:27:51.592576 systemd[1]: Startup finished in 3.481s (kernel) + 15.992s (initrd) + 12.821s (userspace) = 32.295s. Dec 12 18:27:56.727118 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:27:56.729900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:27:56.943768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
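With no config-drive labelled config-2 to mount (the repeated "Can't lookup blockdev" messages), the metadata agents above fall back to the link-local metadata service at 169.254.169.254 and fetch the hostname, instance-id, public keys and addresses over HTTP. A minimal sketch of that fallback against one of the same endpoints logged above; it only returns anything useful when run on the instance itself:

```python
# A minimal sketch of the fallback the metadata agent performs above: with no
# config-drive labelled "config-2", it queries the link-local metadata service.
# The endpoint below is one of the URLs logged by coreos-metadata.
import urllib.request

ENDPOINT = "http://169.254.169.254/latest/meta-data/hostname"

def fetch(url: str, timeout: float = 2.0) -> str:
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    try:
        print(fetch(ENDPOINT))
    except OSError as exc:   # not on a cloud instance, or service unreachable
        print(f"metadata service not reachable: {exc}")
```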
Dec 12 18:27:56.955917 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:27:57.024363 kubelet[1874]: E1212 18:27:57.024104 1874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:27:57.028712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:27:57.028979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:27:57.029962 systemd[1]: kubelet.service: Consumed 259ms CPU time, 108.6M memory peak. Dec 12 18:27:59.406817 systemd[1]: Started sshd@3-10.230.23.66:22-139.178.89.65:54578.service - OpenSSH per-connection server daemon (139.178.89.65:54578). Dec 12 18:28:00.202245 sshd[1881]: Accepted publickey for core from 139.178.89.65 port 54578 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:28:00.204030 sshd-session[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:00.212986 systemd-logind[1633]: New session 6 of user core. Dec 12 18:28:00.222587 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 18:28:00.644205 sshd[1884]: Connection closed by 139.178.89.65 port 54578 Dec 12 18:28:00.644799 sshd-session[1881]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:00.651398 systemd[1]: sshd@3-10.230.23.66:22-139.178.89.65:54578.service: Deactivated successfully. Dec 12 18:28:00.653815 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:28:00.655560 systemd-logind[1633]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:28:00.658547 systemd-logind[1633]: Removed session 6. Dec 12 18:28:00.803561 systemd[1]: Started sshd@4-10.230.23.66:22-139.178.89.65:58132.service - OpenSSH per-connection server daemon (139.178.89.65:58132). Dec 12 18:28:01.587765 sshd[1890]: Accepted publickey for core from 139.178.89.65 port 58132 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:28:01.589653 sshd-session[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:01.599235 systemd-logind[1633]: New session 7 of user core. Dec 12 18:28:01.608533 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 18:28:02.029191 sshd[1893]: Connection closed by 139.178.89.65 port 58132 Dec 12 18:28:02.031296 sshd-session[1890]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:02.037121 systemd[1]: sshd@4-10.230.23.66:22-139.178.89.65:58132.service: Deactivated successfully. Dec 12 18:28:02.039899 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:28:02.042593 systemd-logind[1633]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:28:02.044791 systemd-logind[1633]: Removed session 7. Dec 12 18:28:02.185825 systemd[1]: Started sshd@5-10.230.23.66:22-139.178.89.65:58138.service - OpenSSH per-connection server daemon (139.178.89.65:58138). 
Dec 12 18:28:02.979898 sshd[1899]: Accepted publickey for core from 139.178.89.65 port 58138 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:28:02.982011 sshd-session[1899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:02.991697 systemd-logind[1633]: New session 8 of user core. Dec 12 18:28:03.001454 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:28:03.422883 sshd[1902]: Connection closed by 139.178.89.65 port 58138 Dec 12 18:28:03.424053 sshd-session[1899]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:03.430232 systemd[1]: sshd@5-10.230.23.66:22-139.178.89.65:58138.service: Deactivated successfully. Dec 12 18:28:03.433145 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:28:03.435001 systemd-logind[1633]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:28:03.436888 systemd-logind[1633]: Removed session 8. Dec 12 18:28:03.580806 systemd[1]: Started sshd@6-10.230.23.66:22-139.178.89.65:58152.service - OpenSSH per-connection server daemon (139.178.89.65:58152). Dec 12 18:28:04.374301 sshd[1908]: Accepted publickey for core from 139.178.89.65 port 58152 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:28:04.376243 sshd-session[1908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:04.384877 systemd-logind[1633]: New session 9 of user core. Dec 12 18:28:04.391597 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 18:28:04.691044 sudo[1912]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:28:04.691536 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:28:04.710146 sudo[1912]: pam_unix(sudo:session): session closed for user root Dec 12 18:28:04.857189 sshd[1911]: Connection closed by 139.178.89.65 port 58152 Dec 12 18:28:04.856237 sshd-session[1908]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:04.862784 systemd[1]: sshd@6-10.230.23.66:22-139.178.89.65:58152.service: Deactivated successfully. Dec 12 18:28:04.865152 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:28:04.867033 systemd-logind[1633]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:28:04.869244 systemd-logind[1633]: Removed session 9. Dec 12 18:28:05.016993 systemd[1]: Started sshd@7-10.230.23.66:22-139.178.89.65:58158.service - OpenSSH per-connection server daemon (139.178.89.65:58158). Dec 12 18:28:05.805737 sshd[1918]: Accepted publickey for core from 139.178.89.65 port 58158 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:28:05.807699 sshd-session[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:05.814928 systemd-logind[1633]: New session 10 of user core. Dec 12 18:28:05.831464 systemd[1]: Started session-10.scope - Session 10 of User core. 
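The `sudo setenforce 1` above switches SELinux to enforcing mode on the running kernel. A small sketch for checking the resulting mode, assuming selinuxfs is mounted at the usual /sys/fs/selinux (its `enforce` node reads 0 for permissive, 1 for enforcing):

```python
# Small sketch of checking what `setenforce 1` above toggled, assuming selinuxfs is
# mounted at the usual /sys/fs/selinux.
import pathlib

ENFORCE = pathlib.Path("/sys/fs/selinux/enforce")

def selinux_mode() -> str:
    try:
        return "enforcing" if ENFORCE.read_text().strip() == "1" else "permissive"
    except FileNotFoundError:
        return "selinux not enabled (selinuxfs not mounted)"

if __name__ == "__main__":
    print(selinux_mode())
```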
Dec 12 18:28:06.108030 sudo[1923]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:28:06.108535 sudo[1923]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:28:06.116844 sudo[1923]: pam_unix(sudo:session): session closed for user root Dec 12 18:28:06.128371 sudo[1922]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:28:06.128843 sudo[1922]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:28:06.145972 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:28:06.201000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 18:28:06.204903 kernel: kauditd_printk_skb: 142 callbacks suppressed Dec 12 18:28:06.205026 kernel: audit: type=1305 audit(1765564086.201:222): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 18:28:06.201000 audit[1945]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe3fd98f00 a2=420 a3=0 items=0 ppid=1926 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:06.209321 augenrules[1945]: No rules Dec 12 18:28:06.210351 kernel: audit: type=1300 audit(1765564086.201:222): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe3fd98f00 a2=420 a3=0 items=0 ppid=1926 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:06.201000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:28:06.215051 kernel: audit: type=1327 audit(1765564086.201:222): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:28:06.215936 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:28:06.216759 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:28:06.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:06.222187 kernel: audit: type=1130 audit(1765564086.217:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:06.222726 sudo[1922]: pam_unix(sudo:session): session closed for user root Dec 12 18:28:06.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:06.227257 kernel: audit: type=1131 audit(1765564086.218:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:28:06.227370 kernel: audit: type=1106 audit(1765564086.222:225): pid=1922 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:28:06.222000 audit[1922]: USER_END pid=1922 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:28:06.222000 audit[1922]: CRED_DISP pid=1922 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:28:06.233272 kernel: audit: type=1104 audit(1765564086.222:226): pid=1922 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:28:06.373416 sshd[1921]: Connection closed by 139.178.89.65 port 58158 Dec 12 18:28:06.374534 sshd-session[1918]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:06.377000 audit[1918]: USER_END pid=1918 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.384200 kernel: audit: type=1106 audit(1765564086.377:227): pid=1918 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.384184 systemd[1]: sshd@7-10.230.23.66:22-139.178.89.65:58158.service: Deactivated successfully. Dec 12 18:28:06.377000 audit[1918]: CRED_DISP pid=1918 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.387274 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:28:06.389356 systemd-logind[1633]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:28:06.390312 kernel: audit: type=1104 audit(1765564086.377:228): pid=1918 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.23.66:22-139.178.89.65:58158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:06.392799 systemd-logind[1633]: Removed session 10. Dec 12 18:28:06.395294 kernel: audit: type=1131 audit(1765564086.384:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.23.66:22-139.178.89.65:58158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:28:06.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.23.66:22-139.178.89.65:58174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:06.530522 systemd[1]: Started sshd@8-10.230.23.66:22-139.178.89.65:58174.service - OpenSSH per-connection server daemon (139.178.89.65:58174). Dec 12 18:28:07.159775 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 18:28:07.163412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:28:07.316000 audit[1954]: USER_ACCT pid=1954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:07.317108 sshd[1954]: Accepted publickey for core from 139.178.89.65 port 58174 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:28:07.318000 audit[1954]: CRED_ACQ pid=1954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:07.318000 audit[1954]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd11f5a470 a2=3 a3=0 items=0 ppid=1 pid=1954 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:07.318000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:07.319029 sshd-session[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:07.329775 systemd-logind[1633]: New session 11 of user core. Dec 12 18:28:07.336470 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:28:07.342000 audit[1954]: USER_START pid=1954 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:07.345000 audit[1960]: CRED_ACQ pid=1960 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:07.391042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:28:07.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:28:07.404972 (kubelet)[1966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:28:07.473001 kubelet[1966]: E1212 18:28:07.472788 1966 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:28:07.476498 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:28:07.476987 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:28:07.478122 systemd[1]: kubelet.service: Consumed 233ms CPU time, 109.7M memory peak. Dec 12 18:28:07.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:28:07.619000 audit[1973]: USER_ACCT pid=1973 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:28:07.619935 sudo[1973]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:28:07.620000 audit[1973]: CRED_REFR pid=1973 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:28:07.620934 sudo[1973]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:28:07.624000 audit[1973]: USER_START pid=1973 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:28:08.139728 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 18:28:08.159778 (dockerd)[1991]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:28:08.543190 dockerd[1991]: time="2025-12-12T18:28:08.543019809Z" level=info msg="Starting up" Dec 12 18:28:08.545276 dockerd[1991]: time="2025-12-12T18:28:08.545239022Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:28:08.563280 dockerd[1991]: time="2025-12-12T18:28:08.563181655Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:28:08.628699 dockerd[1991]: time="2025-12-12T18:28:08.628637186Z" level=info msg="Loading containers: start." 
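As dockerd loads containers it creates its iptables chains, and each NETFILTER_CFG audit record below carries the command line as a hex-encoded, NUL-separated PROCTITLE field. Decoding one of them (copied verbatim from the record below) recovers the exact iptables invocation:

```python
# The PROCTITLE fields in the audit records around here are hex-encoded argv strings
# with NUL separators. Decoding one shows the iptables command dockerd ran while
# creating its chains.
def decode_proctitle(hexstr: str) -> str:
    return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

# Taken from the NETFILTER_CFG/PROCTITLE record below:
print(decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
))
# -> /usr/bin/iptables --wait -t nat -N DOCKER
```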
Dec 12 18:28:08.647843 kernel: Initializing XFRM netlink socket Dec 12 18:28:08.739000 audit[2042]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.739000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdb95b1710 a2=0 a3=0 items=0 ppid=1991 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.739000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 18:28:08.744000 audit[2044]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.744000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe0f2ad7d0 a2=0 a3=0 items=0 ppid=1991 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.744000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 18:28:08.747000 audit[2046]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.747000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec75caff0 a2=0 a3=0 items=0 ppid=1991 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.747000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 18:28:08.750000 audit[2048]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.750000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff63b3bd50 a2=0 a3=0 items=0 ppid=1991 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.750000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 18:28:08.753000 audit[2050]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.753000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd45dcf250 a2=0 a3=0 items=0 ppid=1991 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.753000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 18:28:08.756000 audit[2052]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.756000 audit[2052]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffcfe0a8170 a2=0 a3=0 items=0 ppid=1991 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.756000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:28:08.760000 audit[2054]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.760000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe922dbc60 a2=0 a3=0 items=0 ppid=1991 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.760000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:28:08.763000 audit[2056]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.763000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff0ea3c660 a2=0 a3=0 items=0 ppid=1991 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.763000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 18:28:08.813000 audit[2059]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.813000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc34edbef0 a2=0 a3=0 items=0 ppid=1991 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.813000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 18:28:08.816000 audit[2061]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.816000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe58895d00 a2=0 a3=0 items=0 ppid=1991 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.816000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 18:28:08.821000 audit[2063]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.821000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd229c7870 a2=0 
a3=0 items=0 ppid=1991 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 18:28:08.824000 audit[2065]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.824000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd55c27dc0 a2=0 a3=0 items=0 ppid=1991 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:28:08.827000 audit[2067]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.827000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffe8e34220 a2=0 a3=0 items=0 ppid=1991 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.827000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 18:28:08.886000 audit[2097]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.886000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc82de30d0 a2=0 a3=0 items=0 ppid=1991 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 18:28:08.889000 audit[2099]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.889000 audit[2099]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe3d1f89b0 a2=0 a3=0 items=0 ppid=1991 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.889000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 18:28:08.892000 audit[2101]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.892000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff60a1c3e0 a2=0 a3=0 items=0 ppid=1991 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:28:08.892000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 18:28:08.896000 audit[2103]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.896000 audit[2103]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6c7cb4a0 a2=0 a3=0 items=0 ppid=1991 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.896000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 18:28:08.899000 audit[2105]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.899000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffbdf2b6e0 a2=0 a3=0 items=0 ppid=1991 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.899000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 18:28:08.902000 audit[2107]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.902000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcfe0134a0 a2=0 a3=0 items=0 ppid=1991 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.902000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:28:08.906000 audit[2109]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.906000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc708cba60 a2=0 a3=0 items=0 ppid=1991 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.906000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:28:08.909000 audit[2111]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.909000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffde8cdfcd0 a2=0 a3=0 items=0 ppid=1991 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.909000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 18:28:08.913000 audit[2113]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.913000 audit[2113]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd63beaf40 a2=0 a3=0 items=0 ppid=1991 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.913000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 18:28:08.916000 audit[2115]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.916000 audit[2115]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc67dc5730 a2=0 a3=0 items=0 ppid=1991 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.916000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 18:28:08.920000 audit[2117]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.920000 audit[2117]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd07b122b0 a2=0 a3=0 items=0 ppid=1991 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 18:28:08.923000 audit[2119]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.923000 audit[2119]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff01d69b10 a2=0 a3=0 items=0 ppid=1991 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.923000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:28:08.927000 audit[2121]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.927000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdbbafae60 a2=0 a3=0 items=0 ppid=1991 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.927000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 18:28:08.935000 audit[2126]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.935000 audit[2126]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdb16c76e0 a2=0 a3=0 items=0 ppid=1991 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.935000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 18:28:08.938000 audit[2128]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.938000 audit[2128]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff4dfe7ab0 a2=0 a3=0 items=0 ppid=1991 pid=2128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.938000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 18:28:08.941000 audit[2130]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.941000 audit[2130]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe5ba32ec0 a2=0 a3=0 items=0 ppid=1991 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.941000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 18:28:08.945000 audit[2132]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.945000 audit[2132]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd7de25e70 a2=0 a3=0 items=0 ppid=1991 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 18:28:08.949000 audit[2134]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2134 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.949000 audit[2134]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe1c83ce10 a2=0 a3=0 items=0 ppid=1991 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.949000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 18:28:08.952000 audit[2136]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2136 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:08.952000 audit[2136]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffd30b77e0 a2=0 a3=0 items=0 ppid=1991 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 18:28:08.964487 systemd-timesyncd[1543]: Network configuration changed, trying to establish connection. Dec 12 18:28:08.971000 audit[2140]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.971000 audit[2140]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fffda5bb820 a2=0 a3=0 items=0 ppid=1991 pid=2140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.971000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 18:28:08.974000 audit[2142]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.974000 audit[2142]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd248e55a0 a2=0 a3=0 items=0 ppid=1991 pid=2142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.974000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 18:28:08.990000 audit[2150]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:08.990000 audit[2150]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fffdd1f5cc0 a2=0 a3=0 items=0 ppid=1991 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:08.990000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 18:28:09.004000 audit[2156]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:09.004000 audit[2156]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc84714ef0 a2=0 a3=0 items=0 ppid=1991 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:09.004000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 18:28:09.008000 audit[2158]: NETFILTER_CFG table=filter:38 
family=2 entries=1 op=nft_register_rule pid=2158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:09.008000 audit[2158]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fffb3dc3790 a2=0 a3=0 items=0 ppid=1991 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:09.008000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 18:28:09.011000 audit[2160]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:09.011000 audit[2160]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff85f64030 a2=0 a3=0 items=0 ppid=1991 pid=2160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:09.011000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 18:28:09.014000 audit[2162]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:09.014000 audit[2162]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe52816870 a2=0 a3=0 items=0 ppid=1991 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:09.014000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:28:09.018000 audit[2164]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:09.018000 audit[2164]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe438e2d50 a2=0 a3=0 items=0 ppid=1991 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:09.018000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 18:28:09.019428 systemd-networkd[1573]: docker0: Link UP Dec 12 18:28:09.031745 dockerd[1991]: time="2025-12-12T18:28:09.031681931Z" level=info msg="Loading containers: done." Dec 12 18:28:09.052272 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3347798422-merged.mount: Deactivated successfully. 
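The audit records in this run show dockerd creating its iptables and ip6tables chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) and the first NAT and FORWARD rules. The PROCTITLE field in each record is the invoked command line, hex-encoded with NUL-separated arguments. A minimal decoding sketch (the helper name is illustrative; the sample value is copied from the first PROCTITLE record above):

```python
# Decode an audit PROCTITLE field (hex-encoded argv, NUL-separated) back into a
# readable command line. Sample value taken verbatim from the log above.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode(errors="replace") for arg in raw.split(b"\x00") if arg)

sample = "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
print(decode_proctitle(sample))  # -> /usr/bin/iptables --wait -t nat -N DOCKER
```

On hosts with the audit userspace tools installed, `ausearch -i` performs the same interpretation directly on the audit log.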
Dec 12 18:28:09.067283 dockerd[1991]: time="2025-12-12T18:28:09.067113720Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:28:09.067495 dockerd[1991]: time="2025-12-12T18:28:09.067297118Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:28:09.067495 dockerd[1991]: time="2025-12-12T18:28:09.067468854Z" level=info msg="Initializing buildkit" Dec 12 18:28:09.112746 dockerd[1991]: time="2025-12-12T18:28:09.112330906Z" level=info msg="Completed buildkit initialization" Dec 12 18:28:09.128273 dockerd[1991]: time="2025-12-12T18:28:09.128128404Z" level=info msg="Daemon has completed initialization" Dec 12 18:28:09.129598 dockerd[1991]: time="2025-12-12T18:28:09.128451871Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:28:09.128785 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:28:09.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:09.289480 systemd-timesyncd[1543]: Contacted time server [2a01:7e00::f03c:93ff:fe0e:ba3]:123 (2.flatcar.pool.ntp.org). Dec 12 18:28:09.290061 systemd-timesyncd[1543]: Initial clock synchronization to Fri 2025-12-12 18:28:09.342761 UTC. Dec 12 18:28:10.386253 containerd[1664]: time="2025-12-12T18:28:10.386058204Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 18:28:11.436020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792264422.mount: Deactivated successfully. 
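dockerd reports that its API is listening on /run/docker.sock with version 28.0.4. One way to confirm that from the same host is to query the Engine API's /version endpoint over that Unix socket; below is a small stdlib-only sketch (the UnixHTTPConnection class is illustrative, and reading the socket typically requires root or membership in the docker group):

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that speaks HTTP over a Unix domain socket."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

# Query the daemon the same way the docker CLI does, over /run/docker.sock.
conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
info = json.loads(conn.getresponse().read())
print(info["Version"], info["ApiVersion"], info["KernelVersion"])
```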
Dec 12 18:28:13.450418 containerd[1664]: time="2025-12-12T18:28:13.450333991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:13.452181 containerd[1664]: time="2025-12-12T18:28:13.452120341Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=27403437" Dec 12 18:28:13.452648 containerd[1664]: time="2025-12-12T18:28:13.452614123Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:13.457209 containerd[1664]: time="2025-12-12T18:28:13.457143874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:13.459761 containerd[1664]: time="2025-12-12T18:28:13.459705870Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 3.073455071s" Dec 12 18:28:13.459837 containerd[1664]: time="2025-12-12T18:28:13.459770641Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 12 18:28:13.461699 containerd[1664]: time="2025-12-12T18:28:13.461570899Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 18:28:15.950018 containerd[1664]: time="2025-12-12T18:28:15.949943731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:15.951752 containerd[1664]: time="2025-12-12T18:28:15.951466519Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24983855" Dec 12 18:28:15.952538 containerd[1664]: time="2025-12-12T18:28:15.952500613Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:15.957463 containerd[1664]: time="2025-12-12T18:28:15.957404886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:15.959311 containerd[1664]: time="2025-12-12T18:28:15.958840506Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 2.497184562s" Dec 12 18:28:15.959484 containerd[1664]: time="2025-12-12T18:28:15.959451002Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 12 
18:28:15.963219 containerd[1664]: time="2025-12-12T18:28:15.962769231Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 18:28:16.018797 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 12 18:28:16.028319 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 12 18:28:16.028460 kernel: audit: type=1131 audit(1765564096.018:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:16.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:16.042000 audit: BPF prog-id=61 op=UNLOAD Dec 12 18:28:16.045215 kernel: audit: type=1334 audit(1765564096.042:283): prog-id=61 op=UNLOAD Dec 12 18:28:17.727191 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 12 18:28:17.731447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:28:18.164678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:28:18.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:18.172225 kernel: audit: type=1130 audit(1765564098.163:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:18.185721 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:28:18.280151 kubelet[2281]: E1212 18:28:18.280020 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:28:18.286461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:28:18.286810 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:28:18.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:28:18.287980 systemd[1]: kubelet.service: Consumed 265ms CPU time, 108.2M memory peak. Dec 12 18:28:18.292274 kernel: audit: type=1131 audit(1765564098.286:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 12 18:28:18.320301 containerd[1664]: time="2025-12-12T18:28:18.320224611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:18.322856 containerd[1664]: time="2025-12-12T18:28:18.322812154Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19396111" Dec 12 18:28:18.323187 containerd[1664]: time="2025-12-12T18:28:18.323122379Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:18.327269 containerd[1664]: time="2025-12-12T18:28:18.327180115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:18.329246 containerd[1664]: time="2025-12-12T18:28:18.328655862Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 2.365838569s" Dec 12 18:28:18.329246 containerd[1664]: time="2025-12-12T18:28:18.328701218Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 12 18:28:18.329843 containerd[1664]: time="2025-12-12T18:28:18.329799253Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 18:28:20.098942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2916457833.mount: Deactivated successfully. 
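Each containerd "Pulled image" line above reports a size in bytes and the wall-clock pull duration, which is enough to estimate effective pull throughput. A small parsing sketch, using the kube-scheduler line above as the sample (the helper name and regex are illustrative, not part of containerd):

```python
import re

# Extract size (bytes) and duration (seconds) from a containerd
# 'Pulled image ... size \"N\" in Ds' line and report MiB/s.
PULLED_RE = re.compile(r'size \\?"(?P<size>\d+)\\?" in (?P<secs>[\d.]+)s')

def pull_throughput_mib_s(line: str) -> float:
    m = PULLED_RE.search(line)
    if m is None:
        raise ValueError("not a 'Pulled image' line")
    return int(m["size"]) / float(m["secs"]) / (1024 * 1024)

sample = r'Pulled image "registry.k8s.io/kube-scheduler:v1.32.10" ... size \"21061302\" in 2.365838569s'
print(f"{pull_throughput_mib_s(sample):.1f} MiB/s")  # roughly 8.5 MiB/s for this pull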
Dec 12 18:28:21.124414 containerd[1664]: time="2025-12-12T18:28:21.123433482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:21.125037 containerd[1664]: time="2025-12-12T18:28:21.125005711Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31157702" Dec 12 18:28:21.125435 containerd[1664]: time="2025-12-12T18:28:21.125360562Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:21.127783 containerd[1664]: time="2025-12-12T18:28:21.127738241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:21.129020 containerd[1664]: time="2025-12-12T18:28:21.128971263Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 2.799002669s" Dec 12 18:28:21.129110 containerd[1664]: time="2025-12-12T18:28:21.129032167Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 12 18:28:21.129798 containerd[1664]: time="2025-12-12T18:28:21.129760753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 18:28:21.775274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997868558.mount: Deactivated successfully. 
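The transient mount units containerd cleans up after each pull (for example var-lib-containerd-tmpmounts-containerd\x2dmount997868558.mount above) use systemd's unit-name encoding: "/" becomes "-" and a literal "-" is escaped as "\x2d". A tiny sketch that recovers the underlying path; `systemd-escape --path --unescape` does the same on the host:

```python
import re

# Undo systemd unit-name escaping for a .mount unit: "-" separates path
# components and "\xHH" escapes bytes (here "\x2d" for a literal "-").
def mount_unit_to_path(unit: str) -> str:
    name = unit.removesuffix(".mount")
    path = "/" + name.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount997868558.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount997868558
```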
Dec 12 18:28:23.395392 containerd[1664]: time="2025-12-12T18:28:23.395318549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:23.397458 containerd[1664]: time="2025-12-12T18:28:23.397414011Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17570093" Dec 12 18:28:23.398221 containerd[1664]: time="2025-12-12T18:28:23.398135678Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:23.402209 containerd[1664]: time="2025-12-12T18:28:23.401438972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:23.403200 containerd[1664]: time="2025-12-12T18:28:23.402948113Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.27314178s" Dec 12 18:28:23.403200 containerd[1664]: time="2025-12-12T18:28:23.402996032Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 12 18:28:23.403756 containerd[1664]: time="2025-12-12T18:28:23.403698110Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 18:28:24.253565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304532776.mount: Deactivated successfully. 
Dec 12 18:28:24.267954 containerd[1664]: time="2025-12-12T18:28:24.267140148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:28:24.269105 containerd[1664]: time="2025-12-12T18:28:24.269067051Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:28:24.270422 containerd[1664]: time="2025-12-12T18:28:24.270384660Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:28:24.273028 containerd[1664]: time="2025-12-12T18:28:24.272989420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:28:24.274114 containerd[1664]: time="2025-12-12T18:28:24.274068730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 870.328255ms" Dec 12 18:28:24.274223 containerd[1664]: time="2025-12-12T18:28:24.274119104Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 18:28:24.275023 containerd[1664]: time="2025-12-12T18:28:24.274991553Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 18:28:25.081382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3874792850.mount: Deactivated successfully. Dec 12 18:28:28.309953 update_engine[1635]: I20251212 18:28:28.309674 1635 update_attempter.cc:509] Updating boot flags... Dec 12 18:28:28.342753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 12 18:28:28.349311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:28:28.690452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:28:28.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:28.704433 kernel: audit: type=1130 audit(1765564108.689:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:28:28.713788 (kubelet)[2428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:28:28.860720 kubelet[2428]: E1212 18:28:28.860648 2428 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:28:28.864710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:28:28.864967 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:28:28.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:28:28.874768 kernel: audit: type=1131 audit(1765564108.866:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:28:28.866236 systemd[1]: kubelet.service: Consumed 264ms CPU time, 107.1M memory peak. Dec 12 18:28:30.996145 containerd[1664]: time="2025-12-12T18:28:30.996014234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:30.997507 containerd[1664]: time="2025-12-12T18:28:30.996823089Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Dec 12 18:28:30.999181 containerd[1664]: time="2025-12-12T18:28:30.999095695Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:31.005181 containerd[1664]: time="2025-12-12T18:28:31.004545242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:31.006146 containerd[1664]: time="2025-12-12T18:28:31.006076808Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.731032839s" Dec 12 18:28:31.006244 containerd[1664]: time="2025-12-12T18:28:31.006153646Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 12 18:28:35.260408 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:28:35.272883 kernel: audit: type=1130 audit(1765564115.259:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:35.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:28:35.260764 systemd[1]: kubelet.service: Consumed 264ms CPU time, 107.1M memory peak. Dec 12 18:28:35.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:35.280201 kernel: audit: type=1131 audit(1765564115.259:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:35.282021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:28:35.327678 systemd[1]: Reload requested from client PID 2465 ('systemctl') (unit session-11.scope)... Dec 12 18:28:35.327735 systemd[1]: Reloading... Dec 12 18:28:35.523201 zram_generator::config[2512]: No configuration found. Dec 12 18:28:35.907768 systemd[1]: Reloading finished in 579 ms. Dec 12 18:28:35.953770 kernel: audit: type=1334 audit(1765564115.946:290): prog-id=65 op=LOAD Dec 12 18:28:35.953915 kernel: audit: type=1334 audit(1765564115.946:291): prog-id=50 op=UNLOAD Dec 12 18:28:35.953992 kernel: audit: type=1334 audit(1765564115.947:292): prog-id=66 op=LOAD Dec 12 18:28:35.946000 audit: BPF prog-id=65 op=LOAD Dec 12 18:28:35.946000 audit: BPF prog-id=50 op=UNLOAD Dec 12 18:28:35.947000 audit: BPF prog-id=66 op=LOAD Dec 12 18:28:35.947000 audit: BPF prog-id=67 op=LOAD Dec 12 18:28:35.957265 kernel: audit: type=1334 audit(1765564115.947:293): prog-id=67 op=LOAD Dec 12 18:28:35.957345 kernel: audit: type=1334 audit(1765564115.947:294): prog-id=51 op=UNLOAD Dec 12 18:28:35.947000 audit: BPF prog-id=51 op=UNLOAD Dec 12 18:28:35.958865 kernel: audit: type=1334 audit(1765564115.947:295): prog-id=52 op=UNLOAD Dec 12 18:28:35.947000 audit: BPF prog-id=52 op=UNLOAD Dec 12 18:28:35.960383 kernel: audit: type=1334 audit(1765564115.950:296): prog-id=68 op=LOAD Dec 12 18:28:35.950000 audit: BPF prog-id=68 op=LOAD Dec 12 18:28:35.962045 kernel: audit: type=1334 audit(1765564115.950:297): prog-id=53 op=UNLOAD Dec 12 18:28:35.950000 audit: BPF prog-id=53 op=UNLOAD Dec 12 18:28:35.953000 audit: BPF prog-id=69 op=LOAD Dec 12 18:28:35.953000 audit: BPF prog-id=70 op=LOAD Dec 12 18:28:35.953000 audit: BPF prog-id=54 op=UNLOAD Dec 12 18:28:35.953000 audit: BPF prog-id=55 op=UNLOAD Dec 12 18:28:35.955000 audit: BPF prog-id=71 op=LOAD Dec 12 18:28:35.955000 audit: BPF prog-id=56 op=UNLOAD Dec 12 18:28:35.956000 audit: BPF prog-id=72 op=LOAD Dec 12 18:28:35.956000 audit: BPF prog-id=57 op=UNLOAD Dec 12 18:28:35.959000 audit: BPF prog-id=73 op=LOAD Dec 12 18:28:35.959000 audit: BPF prog-id=64 op=UNLOAD Dec 12 18:28:35.973000 audit: BPF prog-id=74 op=LOAD Dec 12 18:28:35.973000 audit: BPF prog-id=47 op=UNLOAD Dec 12 18:28:35.973000 audit: BPF prog-id=75 op=LOAD Dec 12 18:28:35.973000 audit: BPF prog-id=76 op=LOAD Dec 12 18:28:35.973000 audit: BPF prog-id=48 op=UNLOAD Dec 12 18:28:35.973000 audit: BPF prog-id=49 op=UNLOAD Dec 12 18:28:35.974000 audit: BPF prog-id=77 op=LOAD Dec 12 18:28:35.974000 audit: BPF prog-id=44 op=UNLOAD Dec 12 18:28:35.974000 audit: BPF prog-id=78 op=LOAD Dec 12 18:28:35.974000 audit: BPF prog-id=79 op=LOAD Dec 12 18:28:35.974000 audit: BPF prog-id=45 op=UNLOAD Dec 12 18:28:35.974000 audit: BPF prog-id=46 op=UNLOAD Dec 12 18:28:35.976000 audit: BPF prog-id=80 op=LOAD Dec 12 18:28:35.976000 audit: BPF prog-id=41 op=UNLOAD Dec 12 18:28:35.976000 audit: BPF prog-id=81 op=LOAD Dec 12 
18:28:35.976000 audit: BPF prog-id=82 op=LOAD Dec 12 18:28:35.976000 audit: BPF prog-id=42 op=UNLOAD Dec 12 18:28:35.976000 audit: BPF prog-id=43 op=UNLOAD Dec 12 18:28:35.978000 audit: BPF prog-id=83 op=LOAD Dec 12 18:28:35.978000 audit: BPF prog-id=58 op=UNLOAD Dec 12 18:28:35.979000 audit: BPF prog-id=84 op=LOAD Dec 12 18:28:35.979000 audit: BPF prog-id=85 op=LOAD Dec 12 18:28:35.979000 audit: BPF prog-id=59 op=UNLOAD Dec 12 18:28:35.979000 audit: BPF prog-id=60 op=UNLOAD Dec 12 18:28:36.002429 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:28:36.002566 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:28:36.003064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:28:36.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:28:36.003152 systemd[1]: kubelet.service: Consumed 199ms CPU time, 98.4M memory peak. Dec 12 18:28:36.006057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:28:36.274397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:28:36.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:36.286642 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:28:36.363449 kubelet[2580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:28:36.364377 kubelet[2580]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:28:36.364377 kubelet[2580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
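The earlier kubelet start attempts above exited immediately because /var/lib/kubelet/config.yaml did not exist yet; in a kubeadm-style bootstrap that file (along with the client CA bundle and the static-pod manifest directory referenced further down in the log) is typically only written once kubeadm has run, so a crash-loop before that point is expected. A minimal sketch of the same existence check, assuming the standard kubeadm file layout seen in this log:

```python
from pathlib import Path

# Paths taken from the kubelet messages in this log; presence of all three is a
# rough signal that the node's bootstrap files have been written.
BOOTSTRAP_PATHS = [
    Path("/var/lib/kubelet/config.yaml"),   # kubelet config (missing in the failed starts)
    Path("/etc/kubernetes/pki/ca.crt"),     # client CA bundle the kubelet watches
    Path("/etc/kubernetes/manifests"),      # static pod manifest directory
]

for path in BOOTSTRAP_PATHS:
    print(f"{'present' if path.exists() else 'missing':>7}  {path}")
```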
Dec 12 18:28:36.366222 kubelet[2580]: I1212 18:28:36.365775 2580 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:28:36.690994 kubelet[2580]: I1212 18:28:36.690483 2580 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:28:36.690994 kubelet[2580]: I1212 18:28:36.690536 2580 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:28:36.690994 kubelet[2580]: I1212 18:28:36.690926 2580 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:28:36.734048 kubelet[2580]: I1212 18:28:36.733941 2580 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:28:36.736764 kubelet[2580]: E1212 18:28:36.736689 2580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.23.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.23.66:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:28:36.767485 kubelet[2580]: I1212 18:28:36.767411 2580 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:28:36.779014 kubelet[2580]: I1212 18:28:36.778979 2580 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 18:28:36.784631 kubelet[2580]: I1212 18:28:36.784560 2580 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:28:36.784939 kubelet[2580]: I1212 18:28:36.784618 2580 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-vv1nl.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:28:36.786679 kubelet[2580]: I1212 18:28:36.786609 2580 topology_manager.go:138] "Creating topology manager with none policy" Dec 
12 18:28:36.786679 kubelet[2580]: I1212 18:28:36.786641 2580 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:28:36.789373 kubelet[2580]: I1212 18:28:36.789324 2580 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:28:36.793302 kubelet[2580]: I1212 18:28:36.793138 2580 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:28:36.794084 kubelet[2580]: I1212 18:28:36.793828 2580 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:28:36.795611 kubelet[2580]: I1212 18:28:36.795181 2580 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:28:36.795611 kubelet[2580]: I1212 18:28:36.795222 2580 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:28:36.798900 kubelet[2580]: W1212 18:28:36.798825 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.23.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-vv1nl.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.23.66:6443: connect: connection refused Dec 12 18:28:36.799074 kubelet[2580]: E1212 18:28:36.799040 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.23.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-vv1nl.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.23.66:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:28:36.801239 kubelet[2580]: I1212 18:28:36.801197 2580 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 18:28:36.804943 kubelet[2580]: I1212 18:28:36.804585 2580 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:28:36.808210 kubelet[2580]: W1212 18:28:36.808156 2580 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
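The Container Manager line above dumps the kubelet's NodeConfig as JSON, including the hard eviction thresholds in effect. Pulling that array out makes the limits easier to read; the JSON below is copied verbatim from that log line, and the formatting code is only illustrative:

```python
import json

# HardEvictionThresholds array copied from the kubelet NodeConfig line above.
thresholds_json = """
[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}]
"""

for t in json.loads(thresholds_json):
    value = t["Value"]
    limit = value["Quantity"] if value["Quantity"] else f"{value['Percentage']:.0%}"
    print(f"{t['Signal']:<22} {t['Operator']} {limit}")
```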
Dec 12 18:28:36.809341 kubelet[2580]: I1212 18:28:36.809301 2580 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:28:36.809425 kubelet[2580]: I1212 18:28:36.809363 2580 server.go:1287] "Started kubelet" Dec 12 18:28:36.810305 kubelet[2580]: W1212 18:28:36.809573 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.23.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.23.66:6443: connect: connection refused Dec 12 18:28:36.810305 kubelet[2580]: E1212 18:28:36.809622 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.23.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.23.66:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:28:36.818856 kubelet[2580]: I1212 18:28:36.817714 2580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:28:36.825828 kubelet[2580]: I1212 18:28:36.824825 2580 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:28:36.826344 kubelet[2580]: I1212 18:28:36.826315 2580 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:28:36.829781 kubelet[2580]: I1212 18:28:36.828999 2580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:28:36.829781 kubelet[2580]: I1212 18:28:36.829385 2580 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:28:36.830769 kubelet[2580]: I1212 18:28:36.830724 2580 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:28:36.829000 audit[2591]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:36.829000 audit[2591]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffc0b3a700 a2=0 a3=0 items=0 ppid=2580 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:36.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 18:28:36.832086 kubelet[2580]: I1212 18:28:36.831818 2580 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:28:36.832260 kubelet[2580]: E1212 18:28:36.832144 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" Dec 12 18:28:36.835500 kubelet[2580]: I1212 18:28:36.834584 2580 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:28:36.835500 kubelet[2580]: I1212 18:28:36.834688 2580 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:28:36.835850 kubelet[2580]: W1212 18:28:36.835769 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.23.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.23.66:6443: connect: connection refused Dec 12 18:28:36.835850 kubelet[2580]: E1212 18:28:36.835838 2580 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.23.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.23.66:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:28:36.836986 kubelet[2580]: E1212 18:28:36.833468 2580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.23.66:6443/api/v1/namespaces/default/events\": dial tcp 10.230.23.66:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-vv1nl.gb1.brightbox.com.18808b35ba98b020 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-vv1nl.gb1.brightbox.com,UID:srv-vv1nl.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-vv1nl.gb1.brightbox.com,},FirstTimestamp:2025-12-12 18:28:36.80933072 +0000 UTC m=+0.508024027,LastTimestamp:2025-12-12 18:28:36.80933072 +0000 UTC m=+0.508024027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-vv1nl.gb1.brightbox.com,}" Dec 12 18:28:36.837705 kubelet[2580]: I1212 18:28:36.837576 2580 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:28:36.837962 kubelet[2580]: I1212 18:28:36.837903 2580 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:28:36.842671 kubelet[2580]: E1212 18:28:36.842051 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.23.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-vv1nl.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.23.66:6443: connect: connection refused" interval="200ms" Dec 12 18:28:36.844140 kubelet[2580]: I1212 18:28:36.844104 2580 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:28:36.846000 audit[2593]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:36.846000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3f6bcfb0 a2=0 a3=0 items=0 ppid=2580 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:36.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 18:28:36.855482 kubelet[2580]: E1212 18:28:36.855409 2580 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:28:36.862000 audit[2598]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:36.862000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcde2dd3b0 a2=0 a3=0 items=0 ppid=2580 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:36.862000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:28:36.875000 audit[2602]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:36.875000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc20fcf110 a2=0 a3=0 items=0 ppid=2580 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:36.875000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:28:36.884544 kubelet[2580]: I1212 18:28:36.884515 2580 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:28:36.884725 kubelet[2580]: I1212 18:28:36.884701 2580 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:28:36.884908 kubelet[2580]: I1212 18:28:36.884887 2580 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:28:36.888131 kubelet[2580]: I1212 18:28:36.887838 2580 policy_none.go:49] "None policy: Start" Dec 12 18:28:36.888131 kubelet[2580]: I1212 18:28:36.887868 2580 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:28:36.888131 kubelet[2580]: I1212 18:28:36.887897 2580 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:28:36.895000 audit[2605]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:36.895000 audit[2605]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd7f772d00 a2=0 a3=0 items=0 ppid=2580 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:36.895000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 18:28:36.897438 kubelet[2580]: I1212 18:28:36.897149 2580 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 12 18:28:36.898000 audit[2607]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:36.898000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff2698290 a2=0 a3=0 items=0 ppid=2580 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:36.898000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 18:28:36.899000 audit[2608]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:36.899000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffda4835a30 a2=0 a3=0 items=0 ppid=2580 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:36.899000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 18:28:36.900960 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:28:36.902773 kubelet[2580]: I1212 18:28:36.901973 2580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:28:36.902773 kubelet[2580]: I1212 18:28:36.902017 2580 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:28:36.902773 kubelet[2580]: I1212 18:28:36.902084 2580 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 18:28:36.902773 kubelet[2580]: I1212 18:28:36.902099 2580 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:28:36.902773 kubelet[2580]: E1212 18:28:36.902224 2580 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:28:36.904135 kubelet[2580]: W1212 18:28:36.904091 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.23.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.23.66:6443: connect: connection refused Dec 12 18:28:36.904266 kubelet[2580]: E1212 18:28:36.904218 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.23.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.23.66:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:28:36.904000 audit[2610]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:36.904000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc86d34bf0 a2=0 a3=0 items=0 ppid=2580 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:36.904000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 18:28:36.906000 audit[2609]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:36.906000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9f84a150 a2=0 a3=0 items=0 ppid=2580 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:36.906000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 18:28:36.907000 audit[2611]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:36.907000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdedf64e70 a2=0 a3=0 items=0 ppid=2580 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:36.907000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 18:28:36.909000 audit[2614]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:36.909000 audit[2614]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7a425210 a2=0 a3=0 items=0 ppid=2580 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:36.909000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 18:28:36.912000 audit[2613]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:36.912000 audit[2613]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9d7eabd0 a2=0 a3=0 items=0 ppid=2580 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:36.912000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 18:28:36.916340 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:28:36.932978 kubelet[2580]: E1212 18:28:36.932918 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" Dec 12 18:28:36.934149 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:28:36.936307 kubelet[2580]: I1212 18:28:36.936277 2580 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:28:36.936865 kubelet[2580]: I1212 18:28:36.936748 2580 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:28:36.936865 kubelet[2580]: I1212 18:28:36.936807 2580 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:28:36.939251 kubelet[2580]: I1212 18:28:36.939130 2580 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:28:36.940007 kubelet[2580]: E1212 18:28:36.939898 2580 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:28:36.940115 kubelet[2580]: E1212 18:28:36.940035 2580 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-vv1nl.gb1.brightbox.com\" not found" Dec 12 18:28:37.021200 systemd[1]: Created slice kubepods-burstable-podd7da52a5f424100778989806695fda1a.slice - libcontainer container kubepods-burstable-podd7da52a5f424100778989806695fda1a.slice. 
Dec 12 18:28:37.035469 kubelet[2580]: I1212 18:28:37.035146 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7da52a5f424100778989806695fda1a-ca-certs\") pod \"kube-apiserver-srv-vv1nl.gb1.brightbox.com\" (UID: \"d7da52a5f424100778989806695fda1a\") " pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.035594 kubelet[2580]: I1212 18:28:37.035489 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55c7f79cd346f6af12ab183a3369d054-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-vv1nl.gb1.brightbox.com\" (UID: \"55c7f79cd346f6af12ab183a3369d054\") " pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.035594 kubelet[2580]: I1212 18:28:37.035526 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/087fd063dd8eadf42328549e1f1e3e9c-kubeconfig\") pod \"kube-scheduler-srv-vv1nl.gb1.brightbox.com\" (UID: \"087fd063dd8eadf42328549e1f1e3e9c\") " pod="kube-system/kube-scheduler-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.035594 kubelet[2580]: I1212 18:28:37.035557 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55c7f79cd346f6af12ab183a3369d054-kubeconfig\") pod \"kube-controller-manager-srv-vv1nl.gb1.brightbox.com\" (UID: \"55c7f79cd346f6af12ab183a3369d054\") " pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.035594 kubelet[2580]: I1212 18:28:37.035585 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7da52a5f424100778989806695fda1a-k8s-certs\") pod \"kube-apiserver-srv-vv1nl.gb1.brightbox.com\" (UID: \"d7da52a5f424100778989806695fda1a\") " pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.035859 kubelet[2580]: I1212 18:28:37.035613 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7da52a5f424100778989806695fda1a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-vv1nl.gb1.brightbox.com\" (UID: \"d7da52a5f424100778989806695fda1a\") " pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.035859 kubelet[2580]: I1212 18:28:37.035639 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55c7f79cd346f6af12ab183a3369d054-ca-certs\") pod \"kube-controller-manager-srv-vv1nl.gb1.brightbox.com\" (UID: \"55c7f79cd346f6af12ab183a3369d054\") " pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.035859 kubelet[2580]: I1212 18:28:37.035670 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55c7f79cd346f6af12ab183a3369d054-flexvolume-dir\") pod \"kube-controller-manager-srv-vv1nl.gb1.brightbox.com\" (UID: \"55c7f79cd346f6af12ab183a3369d054\") " pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.035859 kubelet[2580]: I1212 18:28:37.035748 2580 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55c7f79cd346f6af12ab183a3369d054-k8s-certs\") pod \"kube-controller-manager-srv-vv1nl.gb1.brightbox.com\" (UID: \"55c7f79cd346f6af12ab183a3369d054\") " pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.037631 kubelet[2580]: E1212 18:28:37.037599 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.046182 kubelet[2580]: I1212 18:28:37.044454 2580 kubelet_node_status.go:75] "Attempting to register node" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.046182 kubelet[2580]: E1212 18:28:37.044559 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.23.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-vv1nl.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.23.66:6443: connect: connection refused" interval="400ms" Dec 12 18:28:37.046182 kubelet[2580]: E1212 18:28:37.044861 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.23.66:6443/api/v1/nodes\": dial tcp 10.230.23.66:6443: connect: connection refused" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.045708 systemd[1]: Created slice kubepods-burstable-pod55c7f79cd346f6af12ab183a3369d054.slice - libcontainer container kubepods-burstable-pod55c7f79cd346f6af12ab183a3369d054.slice. Dec 12 18:28:37.059915 kubelet[2580]: E1212 18:28:37.059530 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.063556 systemd[1]: Created slice kubepods-burstable-pod087fd063dd8eadf42328549e1f1e3e9c.slice - libcontainer container kubepods-burstable-pod087fd063dd8eadf42328549e1f1e3e9c.slice. 
Dec 12 18:28:37.066577 kubelet[2580]: E1212 18:28:37.066547 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.248935 kubelet[2580]: I1212 18:28:37.248887 2580 kubelet_node_status.go:75] "Attempting to register node" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.249503 kubelet[2580]: E1212 18:28:37.249430 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.23.66:6443/api/v1/nodes\": dial tcp 10.230.23.66:6443: connect: connection refused" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.339913 containerd[1664]: time="2025-12-12T18:28:37.339682881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-vv1nl.gb1.brightbox.com,Uid:d7da52a5f424100778989806695fda1a,Namespace:kube-system,Attempt:0,}" Dec 12 18:28:37.364431 containerd[1664]: time="2025-12-12T18:28:37.364237255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-vv1nl.gb1.brightbox.com,Uid:55c7f79cd346f6af12ab183a3369d054,Namespace:kube-system,Attempt:0,}" Dec 12 18:28:37.378918 containerd[1664]: time="2025-12-12T18:28:37.378145561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-vv1nl.gb1.brightbox.com,Uid:087fd063dd8eadf42328549e1f1e3e9c,Namespace:kube-system,Attempt:0,}" Dec 12 18:28:37.450383 kubelet[2580]: E1212 18:28:37.450268 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.23.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-vv1nl.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.23.66:6443: connect: connection refused" interval="800ms" Dec 12 18:28:37.485031 containerd[1664]: time="2025-12-12T18:28:37.484812695Z" level=info msg="connecting to shim 29df68577b7f61d9e84096dbf908b22a0c1ce028b684ff43e2e7f8a209724da6" address="unix:///run/containerd/s/1ec00f622799f4826ba40562c9e319e9be0ef8442bd0d9b66e3d9360f8099dd1" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:28:37.487680 containerd[1664]: time="2025-12-12T18:28:37.487636542Z" level=info msg="connecting to shim 3338af5c2fb0a0cb5603fc3f9efee124c1d0e501f1bd21f433a7d24eca693110" address="unix:///run/containerd/s/2a1711773cad7dda8cb4e8ff674e3e83f3b1d8821b62491ba5630469920b0910" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:28:37.491254 containerd[1664]: time="2025-12-12T18:28:37.491132900Z" level=info msg="connecting to shim b84ddf0bfd893ed1551e9ac33e409efc41f3fe546b53580100a19f4ffba5c48d" address="unix:///run/containerd/s/69cf899bd399833ecaca711b25af81d4480d9a0eb99e2fc72f882f9d22a193cf" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:28:37.626424 systemd[1]: Started cri-containerd-29df68577b7f61d9e84096dbf908b22a0c1ce028b684ff43e2e7f8a209724da6.scope - libcontainer container 29df68577b7f61d9e84096dbf908b22a0c1ce028b684ff43e2e7f8a209724da6. Dec 12 18:28:37.637456 systemd[1]: Started cri-containerd-3338af5c2fb0a0cb5603fc3f9efee124c1d0e501f1bd21f433a7d24eca693110.scope - libcontainer container 3338af5c2fb0a0cb5603fc3f9efee124c1d0e501f1bd21f433a7d24eca693110. Dec 12 18:28:37.640033 systemd[1]: Started cri-containerd-b84ddf0bfd893ed1551e9ac33e409efc41f3fe546b53580100a19f4ffba5c48d.scope - libcontainer container b84ddf0bfd893ed1551e9ac33e409efc41f3fe546b53580100a19f4ffba5c48d. 
Dec 12 18:28:37.655322 kubelet[2580]: I1212 18:28:37.654740 2580 kubelet_node_status.go:75] "Attempting to register node" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.655863 kubelet[2580]: E1212 18:28:37.655742 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.23.66:6443/api/v1/nodes\": dial tcp 10.230.23.66:6443: connect: connection refused" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:37.677000 audit: BPF prog-id=86 op=LOAD Dec 12 18:28:37.679000 audit: BPF prog-id=87 op=LOAD Dec 12 18:28:37.679000 audit[2671]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2642 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238346464663062666438393365643135353165396163333365343039 Dec 12 18:28:37.679000 audit: BPF prog-id=87 op=UNLOAD Dec 12 18:28:37.679000 audit[2671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2642 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238346464663062666438393365643135353165396163333365343039 Dec 12 18:28:37.679000 audit: BPF prog-id=88 op=LOAD Dec 12 18:28:37.679000 audit[2671]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2642 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238346464663062666438393365643135353165396163333365343039 Dec 12 18:28:37.679000 audit: BPF prog-id=89 op=LOAD Dec 12 18:28:37.679000 audit[2671]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2642 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238346464663062666438393365643135353165396163333365343039 Dec 12 18:28:37.679000 audit: BPF prog-id=89 op=UNLOAD Dec 12 18:28:37.679000 audit[2671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2642 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238346464663062666438393365643135353165396163333365343039 Dec 12 18:28:37.680000 audit: BPF prog-id=88 op=UNLOAD Dec 12 18:28:37.680000 audit[2671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2642 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238346464663062666438393365643135353165396163333365343039 Dec 12 18:28:37.680000 audit: BPF prog-id=90 op=LOAD Dec 12 18:28:37.680000 audit: BPF prog-id=91 op=LOAD Dec 12 18:28:37.680000 audit[2671]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2642 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238346464663062666438393365643135353165396163333365343039 Dec 12 18:28:37.682000 audit: BPF prog-id=92 op=LOAD Dec 12 18:28:37.682000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2643 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333333861663563326662306130636235363033666333663965666565 Dec 12 18:28:37.682000 audit: BPF prog-id=92 op=UNLOAD Dec 12 18:28:37.682000 audit[2676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333333861663563326662306130636235363033666333663965666565 Dec 12 18:28:37.684000 audit: BPF prog-id=93 op=LOAD Dec 12 18:28:37.684000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2643 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.684000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333333861663563326662306130636235363033666333663965666565 Dec 12 18:28:37.684000 audit: BPF prog-id=94 op=LOAD Dec 12 18:28:37.684000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2643 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333333861663563326662306130636235363033666333663965666565 Dec 12 18:28:37.686000 audit: BPF prog-id=94 op=UNLOAD Dec 12 18:28:37.686000 audit[2676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333333861663563326662306130636235363033666333663965666565 Dec 12 18:28:37.687000 audit: BPF prog-id=93 op=UNLOAD Dec 12 18:28:37.687000 audit[2676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333333861663563326662306130636235363033666333663965666565 Dec 12 18:28:37.687000 audit: BPF prog-id=95 op=LOAD Dec 12 18:28:37.687000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2643 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333333861663563326662306130636235363033666333663965666565 Dec 12 18:28:37.688000 audit: BPF prog-id=96 op=LOAD Dec 12 18:28:37.689000 audit: BPF prog-id=97 op=LOAD Dec 12 18:28:37.689000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2640 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.689000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646636383537376237663631643965383430393664626639303862 Dec 12 18:28:37.690000 audit: BPF prog-id=97 op=UNLOAD Dec 12 18:28:37.690000 audit[2673]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2640 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646636383537376237663631643965383430393664626639303862 Dec 12 18:28:37.690000 audit: BPF prog-id=98 op=LOAD Dec 12 18:28:37.690000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2640 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646636383537376237663631643965383430393664626639303862 Dec 12 18:28:37.691000 audit: BPF prog-id=99 op=LOAD Dec 12 18:28:37.691000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2640 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646636383537376237663631643965383430393664626639303862 Dec 12 18:28:37.691000 audit: BPF prog-id=99 op=UNLOAD Dec 12 18:28:37.691000 audit[2673]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2640 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646636383537376237663631643965383430393664626639303862 Dec 12 18:28:37.692000 audit: BPF prog-id=98 op=UNLOAD Dec 12 18:28:37.692000 audit[2673]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2640 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.692000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646636383537376237663631643965383430393664626639303862 Dec 12 18:28:37.692000 audit: BPF prog-id=100 op=LOAD Dec 12 18:28:37.692000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2640 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646636383537376237663631643965383430393664626639303862 Dec 12 18:28:37.751575 kubelet[2580]: W1212 18:28:37.751027 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.23.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-vv1nl.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.23.66:6443: connect: connection refused Dec 12 18:28:37.751575 kubelet[2580]: E1212 18:28:37.751127 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.23.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-vv1nl.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.23.66:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:28:37.809034 containerd[1664]: time="2025-12-12T18:28:37.808919295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-vv1nl.gb1.brightbox.com,Uid:d7da52a5f424100778989806695fda1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b84ddf0bfd893ed1551e9ac33e409efc41f3fe546b53580100a19f4ffba5c48d\"" Dec 12 18:28:37.814142 containerd[1664]: time="2025-12-12T18:28:37.813739988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-vv1nl.gb1.brightbox.com,Uid:55c7f79cd346f6af12ab183a3369d054,Namespace:kube-system,Attempt:0,} returns sandbox id \"3338af5c2fb0a0cb5603fc3f9efee124c1d0e501f1bd21f433a7d24eca693110\"" Dec 12 18:28:37.814423 containerd[1664]: time="2025-12-12T18:28:37.813924144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-vv1nl.gb1.brightbox.com,Uid:087fd063dd8eadf42328549e1f1e3e9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"29df68577b7f61d9e84096dbf908b22a0c1ce028b684ff43e2e7f8a209724da6\"" Dec 12 18:28:37.819609 containerd[1664]: time="2025-12-12T18:28:37.819072623Z" level=info msg="CreateContainer within sandbox \"3338af5c2fb0a0cb5603fc3f9efee124c1d0e501f1bd21f433a7d24eca693110\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:28:37.821936 containerd[1664]: time="2025-12-12T18:28:37.821741019Z" level=info msg="CreateContainer within sandbox \"b84ddf0bfd893ed1551e9ac33e409efc41f3fe546b53580100a19f4ffba5c48d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:28:37.824381 containerd[1664]: time="2025-12-12T18:28:37.824342206Z" level=info msg="CreateContainer within sandbox \"29df68577b7f61d9e84096dbf908b22a0c1ce028b684ff43e2e7f8a209724da6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:28:37.834980 
containerd[1664]: time="2025-12-12T18:28:37.834915380Z" level=info msg="Container ddad351bb3ee6e7bbbf39fe0389bc992120e9d96562fe8cf427ae6e62d206234: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:28:37.841191 containerd[1664]: time="2025-12-12T18:28:37.841015434Z" level=info msg="Container 300f5f7560e242986e874d1dc890fcca0a520fe5740444cb0a3aae336138f62b: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:28:37.848400 containerd[1664]: time="2025-12-12T18:28:37.848349671Z" level=info msg="Container 9f2241669fc22610c110fd9ef52e0110e5796108b59c06b9b0a9ca64698adcc1: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:28:37.849121 containerd[1664]: time="2025-12-12T18:28:37.849054271Z" level=info msg="CreateContainer within sandbox \"3338af5c2fb0a0cb5603fc3f9efee124c1d0e501f1bd21f433a7d24eca693110\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ddad351bb3ee6e7bbbf39fe0389bc992120e9d96562fe8cf427ae6e62d206234\"" Dec 12 18:28:37.850989 containerd[1664]: time="2025-12-12T18:28:37.850712613Z" level=info msg="StartContainer for \"ddad351bb3ee6e7bbbf39fe0389bc992120e9d96562fe8cf427ae6e62d206234\"" Dec 12 18:28:37.857340 containerd[1664]: time="2025-12-12T18:28:37.857281102Z" level=info msg="CreateContainer within sandbox \"b84ddf0bfd893ed1551e9ac33e409efc41f3fe546b53580100a19f4ffba5c48d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"300f5f7560e242986e874d1dc890fcca0a520fe5740444cb0a3aae336138f62b\"" Dec 12 18:28:37.858290 containerd[1664]: time="2025-12-12T18:28:37.858258803Z" level=info msg="StartContainer for \"300f5f7560e242986e874d1dc890fcca0a520fe5740444cb0a3aae336138f62b\"" Dec 12 18:28:37.860129 containerd[1664]: time="2025-12-12T18:28:37.860093802Z" level=info msg="connecting to shim 300f5f7560e242986e874d1dc890fcca0a520fe5740444cb0a3aae336138f62b" address="unix:///run/containerd/s/69cf899bd399833ecaca711b25af81d4480d9a0eb99e2fc72f882f9d22a193cf" protocol=ttrpc version=3 Dec 12 18:28:37.860445 containerd[1664]: time="2025-12-12T18:28:37.860187749Z" level=info msg="connecting to shim ddad351bb3ee6e7bbbf39fe0389bc992120e9d96562fe8cf427ae6e62d206234" address="unix:///run/containerd/s/2a1711773cad7dda8cb4e8ff674e3e83f3b1d8821b62491ba5630469920b0910" protocol=ttrpc version=3 Dec 12 18:28:37.885966 containerd[1664]: time="2025-12-12T18:28:37.883474085Z" level=info msg="CreateContainer within sandbox \"29df68577b7f61d9e84096dbf908b22a0c1ce028b684ff43e2e7f8a209724da6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f2241669fc22610c110fd9ef52e0110e5796108b59c06b9b0a9ca64698adcc1\"" Dec 12 18:28:37.886278 containerd[1664]: time="2025-12-12T18:28:37.886207817Z" level=info msg="StartContainer for \"9f2241669fc22610c110fd9ef52e0110e5796108b59c06b9b0a9ca64698adcc1\"" Dec 12 18:28:37.890203 containerd[1664]: time="2025-12-12T18:28:37.890086751Z" level=info msg="connecting to shim 9f2241669fc22610c110fd9ef52e0110e5796108b59c06b9b0a9ca64698adcc1" address="unix:///run/containerd/s/1ec00f622799f4826ba40562c9e319e9be0ef8442bd0d9b66e3d9360f8099dd1" protocol=ttrpc version=3 Dec 12 18:28:37.904946 kubelet[2580]: W1212 18:28:37.904806 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.23.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.23.66:6443: connect: connection refused Dec 12 18:28:37.905636 kubelet[2580]: E1212 18:28:37.905557 2580 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.23.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.23.66:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:28:37.908681 systemd[1]: Started cri-containerd-ddad351bb3ee6e7bbbf39fe0389bc992120e9d96562fe8cf427ae6e62d206234.scope - libcontainer container ddad351bb3ee6e7bbbf39fe0389bc992120e9d96562fe8cf427ae6e62d206234. Dec 12 18:28:37.923601 systemd[1]: Started cri-containerd-300f5f7560e242986e874d1dc890fcca0a520fe5740444cb0a3aae336138f62b.scope - libcontainer container 300f5f7560e242986e874d1dc890fcca0a520fe5740444cb0a3aae336138f62b. Dec 12 18:28:37.972489 systemd[1]: Started cri-containerd-9f2241669fc22610c110fd9ef52e0110e5796108b59c06b9b0a9ca64698adcc1.scope - libcontainer container 9f2241669fc22610c110fd9ef52e0110e5796108b59c06b9b0a9ca64698adcc1. Dec 12 18:28:37.979000 audit: BPF prog-id=101 op=LOAD Dec 12 18:28:37.980000 audit: BPF prog-id=102 op=LOAD Dec 12 18:28:37.980000 audit[2755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2643 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616433353162623365653665376262626633396665303338396263 Dec 12 18:28:37.980000 audit: BPF prog-id=102 op=UNLOAD Dec 12 18:28:37.980000 audit[2755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616433353162623365653665376262626633396665303338396263 Dec 12 18:28:37.980000 audit: BPF prog-id=103 op=LOAD Dec 12 18:28:37.980000 audit[2755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2643 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616433353162623365653665376262626633396665303338396263 Dec 12 18:28:37.980000 audit: BPF prog-id=104 op=LOAD Dec 12 18:28:37.980000 audit[2755]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2643 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.980000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616433353162623365653665376262626633396665303338396263 Dec 12 18:28:37.981000 audit: BPF prog-id=104 op=UNLOAD Dec 12 18:28:37.981000 audit[2755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616433353162623365653665376262626633396665303338396263 Dec 12 18:28:37.981000 audit: BPF prog-id=103 op=UNLOAD Dec 12 18:28:37.981000 audit[2755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616433353162623365653665376262626633396665303338396263 Dec 12 18:28:37.983000 audit: BPF prog-id=105 op=LOAD Dec 12 18:28:37.983000 audit[2755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2643 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616433353162623365653665376262626633396665303338396263 Dec 12 18:28:37.985000 audit: BPF prog-id=106 op=LOAD Dec 12 18:28:37.986000 audit: BPF prog-id=107 op=LOAD Dec 12 18:28:37.986000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2642 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.986000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330306635663735363065323432393836653837346431646338393066 Dec 12 18:28:37.986000 audit: BPF prog-id=107 op=UNLOAD Dec 12 18:28:37.986000 audit[2756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2642 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.986000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330306635663735363065323432393836653837346431646338393066 Dec 12 18:28:37.986000 audit: BPF prog-id=108 op=LOAD Dec 12 18:28:37.986000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2642 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.986000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330306635663735363065323432393836653837346431646338393066 Dec 12 18:28:37.987000 audit: BPF prog-id=109 op=LOAD Dec 12 18:28:37.987000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2642 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330306635663735363065323432393836653837346431646338393066 Dec 12 18:28:37.987000 audit: BPF prog-id=109 op=UNLOAD Dec 12 18:28:37.987000 audit[2756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2642 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330306635663735363065323432393836653837346431646338393066 Dec 12 18:28:37.987000 audit: BPF prog-id=108 op=UNLOAD Dec 12 18:28:37.987000 audit[2756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2642 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330306635663735363065323432393836653837346431646338393066 Dec 12 18:28:37.987000 audit: BPF prog-id=110 op=LOAD Dec 12 18:28:37.987000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2642 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:37.987000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330306635663735363065323432393836653837346431646338393066 Dec 12 18:28:37.998228 kubelet[2580]: W1212 18:28:37.998110 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.23.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.23.66:6443: connect: connection refused Dec 12 18:28:37.998384 kubelet[2580]: E1212 18:28:37.998249 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.23.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.23.66:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:28:38.036000 audit: BPF prog-id=111 op=LOAD Dec 12 18:28:38.037000 audit: BPF prog-id=112 op=LOAD Dec 12 18:28:38.037000 audit[2780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2640 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:38.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966323234313636396663323236313063313130666439656635326530 Dec 12 18:28:38.037000 audit: BPF prog-id=112 op=UNLOAD Dec 12 18:28:38.037000 audit[2780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2640 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:38.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966323234313636396663323236313063313130666439656635326530 Dec 12 18:28:38.037000 audit: BPF prog-id=113 op=LOAD Dec 12 18:28:38.037000 audit[2780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2640 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:38.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966323234313636396663323236313063313130666439656635326530 Dec 12 18:28:38.037000 audit: BPF prog-id=114 op=LOAD Dec 12 18:28:38.037000 audit[2780]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2640 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:38.037000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966323234313636396663323236313063313130666439656635326530 Dec 12 18:28:38.038000 audit: BPF prog-id=114 op=UNLOAD Dec 12 18:28:38.038000 audit[2780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2640 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:38.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966323234313636396663323236313063313130666439656635326530 Dec 12 18:28:38.038000 audit: BPF prog-id=113 op=UNLOAD Dec 12 18:28:38.038000 audit[2780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2640 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:38.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966323234313636396663323236313063313130666439656635326530 Dec 12 18:28:38.038000 audit: BPF prog-id=115 op=LOAD Dec 12 18:28:38.038000 audit[2780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2640 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:38.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966323234313636396663323236313063313130666439656635326530 Dec 12 18:28:38.068883 containerd[1664]: time="2025-12-12T18:28:38.068819709Z" level=info msg="StartContainer for \"300f5f7560e242986e874d1dc890fcca0a520fe5740444cb0a3aae336138f62b\" returns successfully" Dec 12 18:28:38.091098 containerd[1664]: time="2025-12-12T18:28:38.091043554Z" level=info msg="StartContainer for \"ddad351bb3ee6e7bbbf39fe0389bc992120e9d96562fe8cf427ae6e62d206234\" returns successfully" Dec 12 18:28:38.140627 containerd[1664]: time="2025-12-12T18:28:38.140294013Z" level=info msg="StartContainer for \"9f2241669fc22610c110fd9ef52e0110e5796108b59c06b9b0a9ca64698adcc1\" returns successfully" Dec 12 18:28:38.251646 kubelet[2580]: E1212 18:28:38.251580 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.23.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-vv1nl.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.23.66:6443: connect: connection refused" interval="1.6s" Dec 12 18:28:38.290538 kubelet[2580]: W1212 18:28:38.289328 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.23.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.23.66:6443: 
connect: connection refused Dec 12 18:28:38.290915 kubelet[2580]: E1212 18:28:38.290661 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.23.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.23.66:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:28:38.463373 kubelet[2580]: I1212 18:28:38.462932 2580 kubelet_node_status.go:75] "Attempting to register node" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:38.465782 kubelet[2580]: E1212 18:28:38.465702 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.23.66:6443/api/v1/nodes\": dial tcp 10.230.23.66:6443: connect: connection refused" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:38.946300 kubelet[2580]: E1212 18:28:38.943893 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:38.949078 kubelet[2580]: E1212 18:28:38.949033 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:38.951258 kubelet[2580]: E1212 18:28:38.951038 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:39.956305 kubelet[2580]: E1212 18:28:39.954108 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:39.956305 kubelet[2580]: E1212 18:28:39.954126 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:39.957796 kubelet[2580]: E1212 18:28:39.957596 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:40.072033 kubelet[2580]: I1212 18:28:40.071209 2580 kubelet_node_status.go:75] "Attempting to register node" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:40.959462 kubelet[2580]: E1212 18:28:40.959422 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:40.962035 kubelet[2580]: E1212 18:28:40.961991 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vv1nl.gb1.brightbox.com\" not found" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:41.255112 kubelet[2580]: E1212 18:28:41.255025 2580 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-vv1nl.gb1.brightbox.com\" not found" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:41.302883 kubelet[2580]: E1212 18:28:41.302658 2580 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-vv1nl.gb1.brightbox.com.18808b35ba98b020 default 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-vv1nl.gb1.brightbox.com,UID:srv-vv1nl.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-vv1nl.gb1.brightbox.com,},FirstTimestamp:2025-12-12 18:28:36.80933072 +0000 UTC m=+0.508024027,LastTimestamp:2025-12-12 18:28:36.80933072 +0000 UTC m=+0.508024027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-vv1nl.gb1.brightbox.com,}" Dec 12 18:28:41.371192 kubelet[2580]: I1212 18:28:41.369961 2580 kubelet_node_status.go:78] "Successfully registered node" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:41.371192 kubelet[2580]: E1212 18:28:41.370048 2580 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-vv1nl.gb1.brightbox.com\": node \"srv-vv1nl.gb1.brightbox.com\" not found" Dec 12 18:28:41.372723 kubelet[2580]: E1212 18:28:41.372570 2580 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-vv1nl.gb1.brightbox.com.18808b35bd57300b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-vv1nl.gb1.brightbox.com,UID:srv-vv1nl.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:srv-vv1nl.gb1.brightbox.com,},FirstTimestamp:2025-12-12 18:28:36.855369739 +0000 UTC m=+0.554063060,LastTimestamp:2025-12-12 18:28:36.855369739 +0000 UTC m=+0.554063060,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-vv1nl.gb1.brightbox.com,}" Dec 12 18:28:41.433459 kubelet[2580]: I1212 18:28:41.433400 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:41.446425 kubelet[2580]: E1212 18:28:41.446374 2580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-vv1nl.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:41.446425 kubelet[2580]: I1212 18:28:41.446418 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:41.453803 kubelet[2580]: E1212 18:28:41.453567 2580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-vv1nl.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:41.453803 kubelet[2580]: I1212 18:28:41.453606 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:41.458552 kubelet[2580]: E1212 18:28:41.458515 2580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-vv1nl.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:41.813850 kubelet[2580]: I1212 18:28:41.813397 2580 apiserver.go:52] "Watching apiserver" Dec 12 18:28:41.836288 kubelet[2580]: I1212 18:28:41.835575 2580 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:28:41.956642 kubelet[2580]: I1212 18:28:41.956586 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:41.959522 kubelet[2580]: E1212 18:28:41.959491 2580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-vv1nl.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:43.281791 systemd[1]: Reload requested from client PID 2854 ('systemctl') (unit session-11.scope)... Dec 12 18:28:43.281830 systemd[1]: Reloading... Dec 12 18:28:43.433261 zram_generator::config[2907]: No configuration found. Dec 12 18:28:43.839595 systemd[1]: Reloading finished in 557 ms. Dec 12 18:28:43.883871 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:28:43.899810 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:28:43.900616 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:28:43.904230 kernel: kauditd_printk_skb: 204 callbacks suppressed Dec 12 18:28:43.904377 kernel: audit: type=1131 audit(1765564123.899:394): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:43.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:43.906392 systemd[1]: kubelet.service: Consumed 1.150s CPU time, 129.9M memory peak. Dec 12 18:28:43.913438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
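The kubelet failures up to this point (the reflector list errors, the node-lease retry, the rejected node registration) all carry the same underlying error, "dial tcp 10.230.23.66:6443: connect: connection refused": the API server on this node was not yet accepting connections while its static pod was still starting. A minimal Go sketch of the same TCP-level reachability check, assuming the address taken from the log; it only distinguishes refused from reachable and does not attempt TLS or an HTTP request:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the kubelet errors above; adjust for other clusters.
	const apiserver = "10.230.23.66:6443"

	conn, err := net.DialTimeout("tcp", apiserver, 3*time.Second)
	if err != nil {
		// While kube-apiserver is still starting, this prints the same
		// "connect: connection refused" seen in the kubelet records.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver TCP port is accepting connections")
}
```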
Dec 12 18:28:43.920187 kernel: audit: type=1334 audit(1765564123.917:395): prog-id=116 op=LOAD Dec 12 18:28:43.917000 audit: BPF prog-id=116 op=LOAD Dec 12 18:28:43.917000 audit: BPF prog-id=83 op=UNLOAD Dec 12 18:28:43.917000 audit: BPF prog-id=117 op=LOAD Dec 12 18:28:43.925233 kernel: audit: type=1334 audit(1765564123.917:396): prog-id=83 op=UNLOAD Dec 12 18:28:43.925309 kernel: audit: type=1334 audit(1765564123.917:397): prog-id=117 op=LOAD Dec 12 18:28:43.917000 audit: BPF prog-id=118 op=LOAD Dec 12 18:28:43.931844 kernel: audit: type=1334 audit(1765564123.917:398): prog-id=118 op=LOAD Dec 12 18:28:43.931917 kernel: audit: type=1334 audit(1765564123.917:399): prog-id=84 op=UNLOAD Dec 12 18:28:43.931959 kernel: audit: type=1334 audit(1765564123.917:400): prog-id=85 op=UNLOAD Dec 12 18:28:43.917000 audit: BPF prog-id=84 op=UNLOAD Dec 12 18:28:43.917000 audit: BPF prog-id=85 op=UNLOAD Dec 12 18:28:43.920000 audit: BPF prog-id=119 op=LOAD Dec 12 18:28:43.934848 kernel: audit: type=1334 audit(1765564123.920:401): prog-id=119 op=LOAD Dec 12 18:28:43.934935 kernel: audit: type=1334 audit(1765564123.920:402): prog-id=77 op=UNLOAD Dec 12 18:28:43.920000 audit: BPF prog-id=77 op=UNLOAD Dec 12 18:28:43.920000 audit: BPF prog-id=120 op=LOAD Dec 12 18:28:43.920000 audit: BPF prog-id=121 op=LOAD Dec 12 18:28:43.920000 audit: BPF prog-id=78 op=UNLOAD Dec 12 18:28:43.920000 audit: BPF prog-id=79 op=UNLOAD Dec 12 18:28:43.922000 audit: BPF prog-id=122 op=LOAD Dec 12 18:28:43.922000 audit: BPF prog-id=74 op=UNLOAD Dec 12 18:28:43.922000 audit: BPF prog-id=123 op=LOAD Dec 12 18:28:43.940210 kernel: audit: type=1334 audit(1765564123.920:403): prog-id=120 op=LOAD Dec 12 18:28:43.922000 audit: BPF prog-id=124 op=LOAD Dec 12 18:28:43.922000 audit: BPF prog-id=75 op=UNLOAD Dec 12 18:28:43.922000 audit: BPF prog-id=76 op=UNLOAD Dec 12 18:28:43.922000 audit: BPF prog-id=125 op=LOAD Dec 12 18:28:43.922000 audit: BPF prog-id=126 op=LOAD Dec 12 18:28:43.922000 audit: BPF prog-id=69 op=UNLOAD Dec 12 18:28:43.922000 audit: BPF prog-id=70 op=UNLOAD Dec 12 18:28:43.925000 audit: BPF prog-id=127 op=LOAD Dec 12 18:28:43.925000 audit: BPF prog-id=73 op=UNLOAD Dec 12 18:28:43.926000 audit: BPF prog-id=128 op=LOAD Dec 12 18:28:43.926000 audit: BPF prog-id=71 op=UNLOAD Dec 12 18:28:43.928000 audit: BPF prog-id=129 op=LOAD Dec 12 18:28:43.928000 audit: BPF prog-id=72 op=UNLOAD Dec 12 18:28:43.931000 audit: BPF prog-id=130 op=LOAD Dec 12 18:28:43.935000 audit: BPF prog-id=68 op=UNLOAD Dec 12 18:28:43.937000 audit: BPF prog-id=131 op=LOAD Dec 12 18:28:43.937000 audit: BPF prog-id=80 op=UNLOAD Dec 12 18:28:43.937000 audit: BPF prog-id=132 op=LOAD Dec 12 18:28:43.937000 audit: BPF prog-id=133 op=LOAD Dec 12 18:28:43.937000 audit: BPF prog-id=81 op=UNLOAD Dec 12 18:28:43.937000 audit: BPF prog-id=82 op=UNLOAD Dec 12 18:28:43.940000 audit: BPF prog-id=134 op=LOAD Dec 12 18:28:43.940000 audit: BPF prog-id=65 op=UNLOAD Dec 12 18:28:43.940000 audit: BPF prog-id=135 op=LOAD Dec 12 18:28:43.940000 audit: BPF prog-id=136 op=LOAD Dec 12 18:28:43.940000 audit: BPF prog-id=66 op=UNLOAD Dec 12 18:28:43.940000 audit: BPF prog-id=67 op=UNLOAD Dec 12 18:28:44.227227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:28:44.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:28:44.244974 (kubelet)[2966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:28:44.350649 kubelet[2966]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:28:44.351217 kubelet[2966]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:28:44.351333 kubelet[2966]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:28:44.351559 kubelet[2966]: I1212 18:28:44.351502 2966 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:28:44.362384 kubelet[2966]: I1212 18:28:44.362061 2966 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:28:44.362565 kubelet[2966]: I1212 18:28:44.362544 2966 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:28:44.363147 kubelet[2966]: I1212 18:28:44.363102 2966 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:28:44.367351 kubelet[2966]: I1212 18:28:44.367323 2966 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 18:28:44.379274 kubelet[2966]: I1212 18:28:44.378401 2966 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:28:44.392376 kubelet[2966]: I1212 18:28:44.392262 2966 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:28:44.399997 kubelet[2966]: I1212 18:28:44.399758 2966 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:28:44.402210 kubelet[2966]: I1212 18:28:44.402140 2966 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:28:44.403002 kubelet[2966]: I1212 18:28:44.402295 2966 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-vv1nl.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:28:44.403296 kubelet[2966]: I1212 18:28:44.403270 2966 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:28:44.404595 kubelet[2966]: I1212 18:28:44.403398 2966 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:28:44.404595 kubelet[2966]: I1212 18:28:44.403478 2966 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:28:44.404595 kubelet[2966]: I1212 18:28:44.403736 2966 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:28:44.404595 kubelet[2966]: I1212 18:28:44.403780 2966 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:28:44.404595 kubelet[2966]: I1212 18:28:44.403823 2966 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:28:44.404595 kubelet[2966]: I1212 18:28:44.403850 2966 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:28:44.406203 kubelet[2966]: I1212 18:28:44.406179 2966 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 18:28:44.406994 kubelet[2966]: I1212 18:28:44.406968 2966 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:28:44.422065 kubelet[2966]: I1212 18:28:44.422028 2966 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:28:44.422342 kubelet[2966]: I1212 18:28:44.422321 2966 server.go:1287] "Started kubelet" Dec 12 18:28:44.429155 kubelet[2966]: I1212 18:28:44.429127 2966 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:28:44.439757 kubelet[2966]: I1212 18:28:44.439694 2966 
volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:28:44.440237 kubelet[2966]: I1212 18:28:44.440211 2966 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:28:44.440439 kubelet[2966]: I1212 18:28:44.440414 2966 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:28:44.440535 kubelet[2966]: I1212 18:28:44.440385 2966 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:28:44.443182 kubelet[2966]: E1212 18:28:44.441589 2966 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:28:44.457418 kubelet[2966]: I1212 18:28:44.457330 2966 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:28:44.465413 kubelet[2966]: I1212 18:28:44.465385 2966 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:28:44.467420 kubelet[2966]: I1212 18:28:44.458112 2966 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:28:44.468017 kubelet[2966]: I1212 18:28:44.467992 2966 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:28:44.475885 kubelet[2966]: I1212 18:28:44.475833 2966 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:28:44.483086 kubelet[2966]: I1212 18:28:44.482080 2966 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:28:44.483086 kubelet[2966]: I1212 18:28:44.482185 2966 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:28:44.491756 kubelet[2966]: I1212 18:28:44.491695 2966 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 18:28:44.494099 kubelet[2966]: I1212 18:28:44.493917 2966 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:28:44.494099 kubelet[2966]: I1212 18:28:44.493965 2966 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:28:44.494099 kubelet[2966]: I1212 18:28:44.493993 2966 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
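The container_manager_linux.go record above dumps the effective node configuration as JSON, including the default hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A minimal sketch that pulls those thresholds out of a trimmed copy of that JSON; the struct below is a reduced local view built from the field names visible in the log line, not kubelet's exported types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeConfig declares only the fields needed to read the eviction thresholds
// from the JSON printed by container_manager_linux.go.
type nodeConfig struct {
	NodeName               string `json:"NodeName"`
	CgroupDriver           string `json:"CgroupDriver"`
	HardEvictionThresholds []struct {
		Signal   string `json:"Signal"`
		Operator string `json:"Operator"`
		Value    struct {
			Quantity   *string `json:"Quantity"`
			Percentage float64 `json:"Percentage"`
		} `json:"Value"`
	} `json:"HardEvictionThresholds"`
}

func main() {
	// Trimmed excerpt of the JSON from the log record above.
	const raw = `{"NodeName":"srv-vv1nl.gb1.brightbox.com","CgroupDriver":"systemd",
	  "HardEvictionThresholds":[
	   {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	   {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]}`

	var cfg nodeConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}
	for _, t := range cfg.HardEvictionThresholds {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```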
Dec 12 18:28:44.494099 kubelet[2966]: I1212 18:28:44.494003 2966 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:28:44.496006 kubelet[2966]: E1212 18:28:44.495976 2966 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:28:44.596474 kubelet[2966]: E1212 18:28:44.596376 2966 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 18:28:44.638311 kubelet[2966]: I1212 18:28:44.637934 2966 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:28:44.638311 kubelet[2966]: I1212 18:28:44.637967 2966 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:28:44.638311 kubelet[2966]: I1212 18:28:44.638025 2966 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:28:44.639332 kubelet[2966]: I1212 18:28:44.638488 2966 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:28:44.639332 kubelet[2966]: I1212 18:28:44.638537 2966 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:28:44.639332 kubelet[2966]: I1212 18:28:44.638572 2966 policy_none.go:49] "None policy: Start" Dec 12 18:28:44.639332 kubelet[2966]: I1212 18:28:44.638587 2966 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:28:44.639332 kubelet[2966]: I1212 18:28:44.638653 2966 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:28:44.639332 kubelet[2966]: I1212 18:28:44.638976 2966 state_mem.go:75] "Updated machine memory state" Dec 12 18:28:44.662672 kubelet[2966]: I1212 18:28:44.660359 2966 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:28:44.662672 kubelet[2966]: I1212 18:28:44.660892 2966 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:28:44.662672 kubelet[2966]: I1212 18:28:44.660926 2966 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:28:44.662672 kubelet[2966]: I1212 18:28:44.662005 2966 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:28:44.670265 kubelet[2966]: E1212 18:28:44.670233 2966 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:28:44.794568 kubelet[2966]: I1212 18:28:44.794446 2966 kubelet_node_status.go:75] "Attempting to register node" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.797895 kubelet[2966]: I1212 18:28:44.797679 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.801508 kubelet[2966]: I1212 18:28:44.801479 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.802128 kubelet[2966]: I1212 18:28:44.802104 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.810144 kubelet[2966]: W1212 18:28:44.809955 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:28:44.811533 kubelet[2966]: W1212 18:28:44.811404 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:28:44.816131 kubelet[2966]: W1212 18:28:44.816095 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:28:44.820313 kubelet[2966]: I1212 18:28:44.820273 2966 kubelet_node_status.go:124] "Node was previously registered" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.820455 kubelet[2966]: I1212 18:28:44.820371 2966 kubelet_node_status.go:78] "Successfully registered node" node="srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.844014 kubelet[2966]: I1212 18:28:44.843941 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7da52a5f424100778989806695fda1a-ca-certs\") pod \"kube-apiserver-srv-vv1nl.gb1.brightbox.com\" (UID: \"d7da52a5f424100778989806695fda1a\") " pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.844014 kubelet[2966]: I1212 18:28:44.843995 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7da52a5f424100778989806695fda1a-k8s-certs\") pod \"kube-apiserver-srv-vv1nl.gb1.brightbox.com\" (UID: \"d7da52a5f424100778989806695fda1a\") " pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.845429 kubelet[2966]: I1212 18:28:44.844034 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55c7f79cd346f6af12ab183a3369d054-ca-certs\") pod \"kube-controller-manager-srv-vv1nl.gb1.brightbox.com\" (UID: \"55c7f79cd346f6af12ab183a3369d054\") " pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.845429 kubelet[2966]: I1212 18:28:44.844063 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55c7f79cd346f6af12ab183a3369d054-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-vv1nl.gb1.brightbox.com\" (UID: \"55c7f79cd346f6af12ab183a3369d054\") " pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.845429 kubelet[2966]: I1212 18:28:44.844097 2966 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7da52a5f424100778989806695fda1a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-vv1nl.gb1.brightbox.com\" (UID: \"d7da52a5f424100778989806695fda1a\") " pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.845429 kubelet[2966]: I1212 18:28:44.844127 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55c7f79cd346f6af12ab183a3369d054-flexvolume-dir\") pod \"kube-controller-manager-srv-vv1nl.gb1.brightbox.com\" (UID: \"55c7f79cd346f6af12ab183a3369d054\") " pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.845429 kubelet[2966]: I1212 18:28:44.844153 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55c7f79cd346f6af12ab183a3369d054-k8s-certs\") pod \"kube-controller-manager-srv-vv1nl.gb1.brightbox.com\" (UID: \"55c7f79cd346f6af12ab183a3369d054\") " pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.846181 kubelet[2966]: I1212 18:28:44.844209 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55c7f79cd346f6af12ab183a3369d054-kubeconfig\") pod \"kube-controller-manager-srv-vv1nl.gb1.brightbox.com\" (UID: \"55c7f79cd346f6af12ab183a3369d054\") " pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:44.846181 kubelet[2966]: I1212 18:28:44.844295 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/087fd063dd8eadf42328549e1f1e3e9c-kubeconfig\") pod \"kube-scheduler-srv-vv1nl.gb1.brightbox.com\" (UID: \"087fd063dd8eadf42328549e1f1e3e9c\") " pod="kube-system/kube-scheduler-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:45.419227 kubelet[2966]: I1212 18:28:45.418491 2966 apiserver.go:52] "Watching apiserver" Dec 12 18:28:45.440698 kubelet[2966]: I1212 18:28:45.440604 2966 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:28:45.562004 kubelet[2966]: I1212 18:28:45.561945 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:45.562004 kubelet[2966]: I1212 18:28:45.561988 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:45.569702 kubelet[2966]: W1212 18:28:45.569577 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:28:45.570424 kubelet[2966]: E1212 18:28:45.569887 2966 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-vv1nl.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:45.577240 kubelet[2966]: W1212 18:28:45.577211 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:28:45.577784 kubelet[2966]: E1212 18:28:45.577423 2966 kubelet.go:3196] "Failed creating a 
mirror pod" err="pods \"kube-controller-manager-srv-vv1nl.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" Dec 12 18:28:45.629295 kubelet[2966]: I1212 18:28:45.629194 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-vv1nl.gb1.brightbox.com" podStartSLOduration=1.629134289 podStartE2EDuration="1.629134289s" podCreationTimestamp="2025-12-12 18:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:28:45.611721881 +0000 UTC m=+1.343539625" watchObservedRunningTime="2025-12-12 18:28:45.629134289 +0000 UTC m=+1.360952001" Dec 12 18:28:45.629720 kubelet[2966]: I1212 18:28:45.629678 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-vv1nl.gb1.brightbox.com" podStartSLOduration=1.629661407 podStartE2EDuration="1.629661407s" podCreationTimestamp="2025-12-12 18:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:28:45.628423834 +0000 UTC m=+1.360241568" watchObservedRunningTime="2025-12-12 18:28:45.629661407 +0000 UTC m=+1.361479140" Dec 12 18:28:45.669189 kubelet[2966]: I1212 18:28:45.669086 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-vv1nl.gb1.brightbox.com" podStartSLOduration=1.669033682 podStartE2EDuration="1.669033682s" podCreationTimestamp="2025-12-12 18:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:28:45.649129168 +0000 UTC m=+1.380946890" watchObservedRunningTime="2025-12-12 18:28:45.669033682 +0000 UTC m=+1.400851415" Dec 12 18:28:48.302333 kubelet[2966]: I1212 18:28:48.302283 2966 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:28:48.303334 containerd[1664]: time="2025-12-12T18:28:48.303222209Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:28:48.303773 kubelet[2966]: I1212 18:28:48.303546 2966 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:28:49.200512 systemd[1]: Created slice kubepods-besteffort-pod5f37cb0a_6c39_4871_8642_b103efe657ae.slice - libcontainer container kubepods-besteffort-pod5f37cb0a_6c39_4871_8642_b103efe657ae.slice. 
Dec 12 18:28:49.275536 kubelet[2966]: I1212 18:28:49.275292 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f37cb0a-6c39-4871-8642-b103efe657ae-kube-proxy\") pod \"kube-proxy-4jgll\" (UID: \"5f37cb0a-6c39-4871-8642-b103efe657ae\") " pod="kube-system/kube-proxy-4jgll" Dec 12 18:28:49.275536 kubelet[2966]: I1212 18:28:49.275357 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f37cb0a-6c39-4871-8642-b103efe657ae-xtables-lock\") pod \"kube-proxy-4jgll\" (UID: \"5f37cb0a-6c39-4871-8642-b103efe657ae\") " pod="kube-system/kube-proxy-4jgll" Dec 12 18:28:49.275536 kubelet[2966]: I1212 18:28:49.275399 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx6rc\" (UniqueName: \"kubernetes.io/projected/5f37cb0a-6c39-4871-8642-b103efe657ae-kube-api-access-rx6rc\") pod \"kube-proxy-4jgll\" (UID: \"5f37cb0a-6c39-4871-8642-b103efe657ae\") " pod="kube-system/kube-proxy-4jgll" Dec 12 18:28:49.275536 kubelet[2966]: I1212 18:28:49.275433 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f37cb0a-6c39-4871-8642-b103efe657ae-lib-modules\") pod \"kube-proxy-4jgll\" (UID: \"5f37cb0a-6c39-4871-8642-b103efe657ae\") " pod="kube-system/kube-proxy-4jgll" Dec 12 18:28:49.432867 systemd[1]: Created slice kubepods-besteffort-pode0631ae1_5f8a_412a_9c04_ce296669b688.slice - libcontainer container kubepods-besteffort-pode0631ae1_5f8a_412a_9c04_ce296669b688.slice. Dec 12 18:28:49.476671 kubelet[2966]: I1212 18:28:49.476601 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6246\" (UniqueName: \"kubernetes.io/projected/e0631ae1-5f8a-412a-9c04-ce296669b688-kube-api-access-w6246\") pod \"tigera-operator-7dcd859c48-n4ztm\" (UID: \"e0631ae1-5f8a-412a-9c04-ce296669b688\") " pod="tigera-operator/tigera-operator-7dcd859c48-n4ztm" Dec 12 18:28:49.476671 kubelet[2966]: I1212 18:28:49.476669 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e0631ae1-5f8a-412a-9c04-ce296669b688-var-lib-calico\") pod \"tigera-operator-7dcd859c48-n4ztm\" (UID: \"e0631ae1-5f8a-412a-9c04-ce296669b688\") " pod="tigera-operator/tigera-operator-7dcd859c48-n4ztm" Dec 12 18:28:49.514242 containerd[1664]: time="2025-12-12T18:28:49.514176291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4jgll,Uid:5f37cb0a-6c39-4871-8642-b103efe657ae,Namespace:kube-system,Attempt:0,}" Dec 12 18:28:49.541765 containerd[1664]: time="2025-12-12T18:28:49.541696364Z" level=info msg="connecting to shim 7bf3249b23797db8a2284a68cfb80465c74292270be802e4650d05d9e21b784b" address="unix:///run/containerd/s/a5c3b28c73034177fe847f649a3a7b134807129206de7817992b37a65b8f226f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:28:49.588537 systemd[1]: Started cri-containerd-7bf3249b23797db8a2284a68cfb80465c74292270be802e4650d05d9e21b784b.scope - libcontainer container 7bf3249b23797db8a2284a68cfb80465c74292270be802e4650d05d9e21b784b. 
Dec 12 18:28:49.620498 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 12 18:28:49.621966 kernel: audit: type=1334 audit(1765564129.615:438): prog-id=137 op=LOAD Dec 12 18:28:49.622040 kernel: audit: type=1334 audit(1765564129.620:439): prog-id=138 op=LOAD Dec 12 18:28:49.615000 audit: BPF prog-id=137 op=LOAD Dec 12 18:28:49.620000 audit: BPF prog-id=138 op=LOAD Dec 12 18:28:49.620000 audit[3037]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3025 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.625278 kernel: audit: type=1300 audit(1765564129.620:439): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3025 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762663332343962323337393764623861323238346136386366623830 Dec 12 18:28:49.635334 kernel: audit: type=1327 audit(1765564129.620:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762663332343962323337393764623861323238346136386366623830 Dec 12 18:28:49.635491 kernel: audit: type=1334 audit(1765564129.620:440): prog-id=138 op=UNLOAD Dec 12 18:28:49.620000 audit: BPF prog-id=138 op=UNLOAD Dec 12 18:28:49.620000 audit[3037]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3025 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.643200 kernel: audit: type=1300 audit(1765564129.620:440): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3025 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762663332343962323337393764623861323238346136386366623830 Dec 12 18:28:49.621000 audit: BPF prog-id=139 op=LOAD Dec 12 18:28:49.654182 kernel: audit: type=1327 audit(1765564129.620:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762663332343962323337393764623861323238346136386366623830 Dec 12 18:28:49.654276 kernel: audit: type=1334 audit(1765564129.621:441): prog-id=139 op=LOAD Dec 12 18:28:49.621000 audit[3037]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3025 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.656242 kernel: audit: type=1300 audit(1765564129.621:441): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3025 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762663332343962323337393764623861323238346136386366623830 Dec 12 18:28:49.663526 kernel: audit: type=1327 audit(1765564129.621:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762663332343962323337393764623861323238346136386366623830 Dec 12 18:28:49.621000 audit: BPF prog-id=140 op=LOAD Dec 12 18:28:49.621000 audit[3037]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3025 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762663332343962323337393764623861323238346136386366623830 Dec 12 18:28:49.621000 audit: BPF prog-id=140 op=UNLOAD Dec 12 18:28:49.621000 audit[3037]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3025 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762663332343962323337393764623861323238346136386366623830 Dec 12 18:28:49.621000 audit: BPF prog-id=139 op=UNLOAD Dec 12 18:28:49.621000 audit[3037]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3025 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762663332343962323337393764623861323238346136386366623830 Dec 12 18:28:49.621000 audit: BPF prog-id=141 op=LOAD Dec 12 18:28:49.621000 audit[3037]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3025 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.621000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762663332343962323337393764623861323238346136386366623830 Dec 12 18:28:49.694311 containerd[1664]: time="2025-12-12T18:28:49.694131186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4jgll,Uid:5f37cb0a-6c39-4871-8642-b103efe657ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bf3249b23797db8a2284a68cfb80465c74292270be802e4650d05d9e21b784b\"" Dec 12 18:28:49.701067 containerd[1664]: time="2025-12-12T18:28:49.700999785Z" level=info msg="CreateContainer within sandbox \"7bf3249b23797db8a2284a68cfb80465c74292270be802e4650d05d9e21b784b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:28:49.726129 containerd[1664]: time="2025-12-12T18:28:49.726071341Z" level=info msg="Container b766e9dbbea877449125cee90f2bf48eff136d875df5b3b2105c7fed2c184a3f: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:28:49.740199 containerd[1664]: time="2025-12-12T18:28:49.740045275Z" level=info msg="CreateContainer within sandbox \"7bf3249b23797db8a2284a68cfb80465c74292270be802e4650d05d9e21b784b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b766e9dbbea877449125cee90f2bf48eff136d875df5b3b2105c7fed2c184a3f\"" Dec 12 18:28:49.741457 containerd[1664]: time="2025-12-12T18:28:49.741413897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-n4ztm,Uid:e0631ae1-5f8a-412a-9c04-ce296669b688,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:28:49.742001 containerd[1664]: time="2025-12-12T18:28:49.741970413Z" level=info msg="StartContainer for \"b766e9dbbea877449125cee90f2bf48eff136d875df5b3b2105c7fed2c184a3f\"" Dec 12 18:28:49.744959 containerd[1664]: time="2025-12-12T18:28:49.744856338Z" level=info msg="connecting to shim b766e9dbbea877449125cee90f2bf48eff136d875df5b3b2105c7fed2c184a3f" address="unix:///run/containerd/s/a5c3b28c73034177fe847f649a3a7b134807129206de7817992b37a65b8f226f" protocol=ttrpc version=3 Dec 12 18:28:49.786779 systemd[1]: Started cri-containerd-b766e9dbbea877449125cee90f2bf48eff136d875df5b3b2105c7fed2c184a3f.scope - libcontainer container b766e9dbbea877449125cee90f2bf48eff136d875df5b3b2105c7fed2c184a3f. Dec 12 18:28:49.800154 containerd[1664]: time="2025-12-12T18:28:49.800001765Z" level=info msg="connecting to shim d52c0bc9d1be4b575cab8eeaf515b38bbb7ea870437f8303f354a3a5b8d71334" address="unix:///run/containerd/s/9062ff61233c81c4653e25a878e1fbd3c8a6e4238bac13b3ee085b65f01888be" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:28:49.847658 systemd[1]: Started cri-containerd-d52c0bc9d1be4b575cab8eeaf515b38bbb7ea870437f8303f354a3a5b8d71334.scope - libcontainer container d52c0bc9d1be4b575cab8eeaf515b38bbb7ea870437f8303f354a3a5b8d71334. 
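The audit records around each runc invocation log the full command line as a hex-encoded proctitle field, with NUL bytes separating the arguments. A minimal decoding sketch; the sample value is a truncated prefix of the proctitle strings above, and the same decoding applies to the iptables records further down (for example the KUBE-PROXY-CANARY chain creations):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE value back into a readable command
// line: the value is hex-encoded argv with NUL bytes between arguments.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Truncated prefix of the proctitle values logged above.
	const sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	cmd, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // runc --root /run/containerd/runc/k8s.io
}
```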
Dec 12 18:28:49.859000 audit: BPF prog-id=142 op=LOAD Dec 12 18:28:49.859000 audit[3063]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3025 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237363665396462626561383737343439313235636565393066326266 Dec 12 18:28:49.859000 audit: BPF prog-id=143 op=LOAD Dec 12 18:28:49.859000 audit[3063]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3025 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237363665396462626561383737343439313235636565393066326266 Dec 12 18:28:49.860000 audit: BPF prog-id=143 op=UNLOAD Dec 12 18:28:49.860000 audit[3063]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3025 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237363665396462626561383737343439313235636565393066326266 Dec 12 18:28:49.860000 audit: BPF prog-id=142 op=UNLOAD Dec 12 18:28:49.860000 audit[3063]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3025 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237363665396462626561383737343439313235636565393066326266 Dec 12 18:28:49.860000 audit: BPF prog-id=144 op=LOAD Dec 12 18:28:49.860000 audit[3063]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3025 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237363665396462626561383737343439313235636565393066326266 Dec 12 18:28:49.887000 audit: BPF prog-id=145 op=LOAD Dec 12 18:28:49.889000 audit: BPF prog-id=146 op=LOAD Dec 12 18:28:49.889000 audit[3103]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3087 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326330626339643162653462353735636162386565616635313562 Dec 12 18:28:49.889000 audit: BPF prog-id=146 op=UNLOAD Dec 12 18:28:49.889000 audit[3103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326330626339643162653462353735636162386565616635313562 Dec 12 18:28:49.890000 audit: BPF prog-id=147 op=LOAD Dec 12 18:28:49.890000 audit[3103]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3087 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326330626339643162653462353735636162386565616635313562 Dec 12 18:28:49.891000 audit: BPF prog-id=148 op=LOAD Dec 12 18:28:49.891000 audit[3103]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3087 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326330626339643162653462353735636162386565616635313562 Dec 12 18:28:49.891000 audit: BPF prog-id=148 op=UNLOAD Dec 12 18:28:49.891000 audit[3103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326330626339643162653462353735636162386565616635313562 Dec 12 18:28:49.891000 audit: BPF prog-id=147 op=UNLOAD Dec 12 18:28:49.891000 audit[3103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326330626339643162653462353735636162386565616635313562 Dec 12 18:28:49.892000 audit: BPF prog-id=149 op=LOAD Dec 12 18:28:49.892000 audit[3103]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3087 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:49.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326330626339643162653462353735636162386565616635313562 Dec 12 18:28:49.909629 containerd[1664]: time="2025-12-12T18:28:49.909447841Z" level=info msg="StartContainer for \"b766e9dbbea877449125cee90f2bf48eff136d875df5b3b2105c7fed2c184a3f\" returns successfully" Dec 12 18:28:49.971137 containerd[1664]: time="2025-12-12T18:28:49.971063138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-n4ztm,Uid:e0631ae1-5f8a-412a-9c04-ce296669b688,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d52c0bc9d1be4b575cab8eeaf515b38bbb7ea870437f8303f354a3a5b8d71334\"" Dec 12 18:28:49.975040 containerd[1664]: time="2025-12-12T18:28:49.974984725Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:28:50.388000 audit[3171]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.388000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf31701d0 a2=0 a3=7ffdf31701bc items=0 ppid=3085 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.388000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 18:28:50.391000 audit[3172]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.391000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5f6e7890 a2=0 a3=7ffe5f6e787c items=0 ppid=3085 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.391000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 18:28:50.393000 audit[3173]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.393000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcaa46d570 a2=0 a3=7ffcaa46d55c items=0 ppid=3085 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 18:28:50.394000 audit[3175]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.394000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca213f970 a2=0 a3=7ffca213f95c items=0 ppid=3085 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.394000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 18:28:50.397000 audit[3176]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.397000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc14ee4c0 a2=0 a3=7ffcc14ee4ac items=0 ppid=3085 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.397000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 18:28:50.399000 audit[3177]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.399000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd775866d0 a2=0 a3=7ffd775866bc items=0 ppid=3085 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 18:28:50.414822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount136019186.mount: Deactivated successfully. 
Dec 12 18:28:50.502000 audit[3178]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.502000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe3db24750 a2=0 a3=7ffe3db2473c items=0 ppid=3085 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.502000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 18:28:50.508000 audit[3180]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.508000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffce9c519e0 a2=0 a3=7ffce9c519cc items=0 ppid=3085 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.508000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 18:28:50.514000 audit[3183]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.514000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffecb648c40 a2=0 a3=7ffecb648c2c items=0 ppid=3085 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.514000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 18:28:50.516000 audit[3184]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.516000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecfd9f730 a2=0 a3=7ffecfd9f71c items=0 ppid=3085 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 18:28:50.521000 audit[3186]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.521000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff70069050 a2=0 a3=7fff7006903c items=0 ppid=3085 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:28:50.521000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 18:28:50.524000 audit[3187]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.524000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9e5c1760 a2=0 a3=7ffd9e5c174c items=0 ppid=3085 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.524000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 18:28:50.534000 audit[3189]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.534000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffce5efe040 a2=0 a3=7ffce5efe02c items=0 ppid=3085 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.534000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 18:28:50.541000 audit[3192]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.541000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffef61535b0 a2=0 a3=7ffef615359c items=0 ppid=3085 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 18:28:50.543000 audit[3193]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.543000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3982ec20 a2=0 a3=7ffd3982ec0c items=0 ppid=3085 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.543000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 18:28:50.547000 audit[3195]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.547000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff40551e70 
a2=0 a3=7fff40551e5c items=0 ppid=3085 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 18:28:50.549000 audit[3196]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.549000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff707650d0 a2=0 a3=7fff707650bc items=0 ppid=3085 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.549000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 18:28:50.554000 audit[3198]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.554000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc23ef0530 a2=0 a3=7ffc23ef051c items=0 ppid=3085 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.554000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:28:50.561000 audit[3201]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.561000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe36e7fe40 a2=0 a3=7ffe36e7fe2c items=0 ppid=3085 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.561000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:28:50.567000 audit[3204]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.567000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe467514a0 a2=0 a3=7ffe4675148c items=0 ppid=3085 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.567000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 18:28:50.569000 audit[3205]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.569000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0c752d10 a2=0 a3=7ffc0c752cfc items=0 ppid=3085 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.569000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 18:28:50.578000 audit[3207]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.578000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffead760140 a2=0 a3=7ffead76012c items=0 ppid=3085 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:28:50.604000 audit[3210]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.604000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd282c6e70 a2=0 a3=7ffd282c6e5c items=0 ppid=3085 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.604000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:28:50.610000 audit[3211]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.610000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8cef8880 a2=0 a3=7ffe8cef886c items=0 ppid=3085 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 18:28:50.613660 kubelet[2966]: I1212 18:28:50.613500 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4jgll" podStartSLOduration=1.6134736950000002 podStartE2EDuration="1.613473695s" podCreationTimestamp="2025-12-12 18:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-12 18:28:50.598966898 +0000 UTC m=+6.330784627" watchObservedRunningTime="2025-12-12 18:28:50.613473695 +0000 UTC m=+6.345291417" Dec 12 18:28:50.617000 audit[3213]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:28:50.617000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcefa52ce0 a2=0 a3=7ffcefa52ccc items=0 ppid=3085 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.617000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 18:28:50.650000 audit[3219]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:28:50.650000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffff6ed2cc0 a2=0 a3=7ffff6ed2cac items=0 ppid=3085 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.650000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:28:50.659000 audit[3219]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:28:50.659000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffff6ed2cc0 a2=0 a3=7ffff6ed2cac items=0 ppid=3085 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.659000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:28:50.662000 audit[3224]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3224 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.662000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcb10246c0 a2=0 a3=7ffcb10246ac items=0 ppid=3085 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.662000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 18:28:50.667000 audit[3226]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.667000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcd0aa1550 a2=0 a3=7ffcd0aa153c items=0 ppid=3085 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 12 18:28:50.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 18:28:50.673000 audit[3229]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.673000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc93248d10 a2=0 a3=7ffc93248cfc items=0 ppid=3085 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.673000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 18:28:50.676000 audit[3230]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.676000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc84c12e20 a2=0 a3=7ffc84c12e0c items=0 ppid=3085 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 18:28:50.680000 audit[3232]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.680000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd500b9dd0 a2=0 a3=7ffd500b9dbc items=0 ppid=3085 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 18:28:50.682000 audit[3233]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.682000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca5edf250 a2=0 a3=7ffca5edf23c items=0 ppid=3085 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 18:28:50.688000 audit[3235]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.688000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7ffdc3c4d2f0 a2=0 a3=7ffdc3c4d2dc items=0 ppid=3085 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.688000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 18:28:50.695000 audit[3238]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.695000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffda8c2de10 a2=0 a3=7ffda8c2ddfc items=0 ppid=3085 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 18:28:50.697000 audit[3239]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.697000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8b755fb0 a2=0 a3=7fff8b755f9c items=0 ppid=3085 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.697000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 18:28:50.702000 audit[3241]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.702000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd4e7d52b0 a2=0 a3=7ffd4e7d529c items=0 ppid=3085 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.702000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 18:28:50.705000 audit[3242]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.705000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffcbab5960 a2=0 a3=7fffcbab594c items=0 ppid=3085 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.705000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 18:28:50.710000 audit[3244]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.710000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeee1630d0 a2=0 a3=7ffeee1630bc items=0 ppid=3085 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.710000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:28:50.718000 audit[3247]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.718000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda018c690 a2=0 a3=7ffda018c67c items=0 ppid=3085 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.718000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 18:28:50.725000 audit[3250]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.725000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff80031c40 a2=0 a3=7fff80031c2c items=0 ppid=3085 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.725000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 18:28:50.727000 audit[3251]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.727000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc5d52a1a0 a2=0 a3=7ffc5d52a18c items=0 ppid=3085 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.727000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 18:28:50.731000 audit[3253]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.731000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffd132b0f0 
a2=0 a3=7fffd132b0dc items=0 ppid=3085 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.731000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:28:50.738000 audit[3256]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.738000 audit[3256]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea724b810 a2=0 a3=7ffea724b7fc items=0 ppid=3085 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.738000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:28:50.741000 audit[3257]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.741000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff09058460 a2=0 a3=7fff0905844c items=0 ppid=3085 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 18:28:50.745000 audit[3259]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.745000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd7821fef0 a2=0 a3=7ffd7821fedc items=0 ppid=3085 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.745000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 18:28:50.747000 audit[3260]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.747000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdbc9f090 a2=0 a3=7ffcdbc9f07c items=0 ppid=3085 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.747000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 18:28:50.753000 audit[3262]: NETFILTER_CFG table=filter:101 family=10 entries=1 
op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.753000 audit[3262]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff65ee50b0 a2=0 a3=7fff65ee509c items=0 ppid=3085 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:28:50.759000 audit[3265]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:28:50.759000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdd440daa0 a2=0 a3=7ffdd440da8c items=0 ppid=3085 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.759000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:28:50.764000 audit[3267]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 18:28:50.764000 audit[3267]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd510f6200 a2=0 a3=7ffd510f61ec items=0 ppid=3085 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.764000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:28:50.765000 audit[3267]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 18:28:50.765000 audit[3267]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd510f6200 a2=0 a3=7ffd510f61ec items=0 ppid=3085 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:50.765000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:28:52.437846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4128123023.mount: Deactivated successfully. 
Dec 12 18:28:53.958671 containerd[1664]: time="2025-12-12T18:28:53.958175261Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 12 18:28:53.962227 containerd[1664]: time="2025-12-12T18:28:53.961473696Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.986323023s" Dec 12 18:28:53.962227 containerd[1664]: time="2025-12-12T18:28:53.961521292Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 18:28:53.969147 containerd[1664]: time="2025-12-12T18:28:53.968084183Z" level=info msg="CreateContainer within sandbox \"d52c0bc9d1be4b575cab8eeaf515b38bbb7ea870437f8303f354a3a5b8d71334\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 18:28:53.985043 containerd[1664]: time="2025-12-12T18:28:53.984982781Z" level=info msg="Container 607028973718fca12498f5ae45a8629307cb80871de8dc34f87e306b7c2585af: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:28:54.009694 containerd[1664]: time="2025-12-12T18:28:54.009610779Z" level=info msg="CreateContainer within sandbox \"d52c0bc9d1be4b575cab8eeaf515b38bbb7ea870437f8303f354a3a5b8d71334\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"607028973718fca12498f5ae45a8629307cb80871de8dc34f87e306b7c2585af\"" Dec 12 18:28:54.010538 containerd[1664]: time="2025-12-12T18:28:54.010488863Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:54.011503 containerd[1664]: time="2025-12-12T18:28:54.011464355Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:54.012120 containerd[1664]: time="2025-12-12T18:28:54.012085364Z" level=info msg="StartContainer for \"607028973718fca12498f5ae45a8629307cb80871de8dc34f87e306b7c2585af\"" Dec 12 18:28:54.013346 containerd[1664]: time="2025-12-12T18:28:54.013310583Z" level=info msg="connecting to shim 607028973718fca12498f5ae45a8629307cb80871de8dc34f87e306b7c2585af" address="unix:///run/containerd/s/9062ff61233c81c4653e25a878e1fbd3c8a6e4238bac13b3ee085b65f01888be" protocol=ttrpc version=3 Dec 12 18:28:54.015776 containerd[1664]: time="2025-12-12T18:28:54.015740158Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:28:54.061387 systemd[1]: Started cri-containerd-607028973718fca12498f5ae45a8629307cb80871de8dc34f87e306b7c2585af.scope - libcontainer container 607028973718fca12498f5ae45a8629307cb80871de8dc34f87e306b7c2585af. 
Dec 12 18:28:54.082000 audit: BPF prog-id=150 op=LOAD Dec 12 18:28:54.082000 audit: BPF prog-id=151 op=LOAD Dec 12 18:28:54.082000 audit[3276]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3087 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:54.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373032383937333731386663613132343938663561653435613836 Dec 12 18:28:54.082000 audit: BPF prog-id=151 op=UNLOAD Dec 12 18:28:54.082000 audit[3276]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:54.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373032383937333731386663613132343938663561653435613836 Dec 12 18:28:54.083000 audit: BPF prog-id=152 op=LOAD Dec 12 18:28:54.083000 audit[3276]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3087 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:54.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373032383937333731386663613132343938663561653435613836 Dec 12 18:28:54.083000 audit: BPF prog-id=153 op=LOAD Dec 12 18:28:54.083000 audit[3276]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3087 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:54.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373032383937333731386663613132343938663561653435613836 Dec 12 18:28:54.083000 audit: BPF prog-id=153 op=UNLOAD Dec 12 18:28:54.083000 audit[3276]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:54.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373032383937333731386663613132343938663561653435613836 Dec 12 18:28:54.084000 audit: BPF prog-id=152 op=UNLOAD Dec 12 18:28:54.084000 audit[3276]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3087 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:54.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373032383937333731386663613132343938663561653435613836 Dec 12 18:28:54.084000 audit: BPF prog-id=154 op=LOAD Dec 12 18:28:54.084000 audit[3276]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3087 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:54.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630373032383937333731386663613132343938663561653435613836 Dec 12 18:28:54.112816 containerd[1664]: time="2025-12-12T18:28:54.112746593Z" level=info msg="StartContainer for \"607028973718fca12498f5ae45a8629307cb80871de8dc34f87e306b7c2585af\" returns successfully" Dec 12 18:28:57.928212 kubelet[2966]: I1212 18:28:57.927351 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-n4ztm" podStartSLOduration=4.934943706 podStartE2EDuration="8.92732451s" podCreationTimestamp="2025-12-12 18:28:49 +0000 UTC" firstStartedPulling="2025-12-12 18:28:49.973984611 +0000 UTC m=+5.705802317" lastFinishedPulling="2025-12-12 18:28:53.966365414 +0000 UTC m=+9.698183121" observedRunningTime="2025-12-12 18:28:54.613475514 +0000 UTC m=+10.345293246" watchObservedRunningTime="2025-12-12 18:28:57.92732451 +0000 UTC m=+13.659142220" Dec 12 18:29:01.866635 sudo[1973]: pam_unix(sudo:session): session closed for user root Dec 12 18:29:01.878235 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 18:29:01.878633 kernel: audit: type=1106 audit(1765564141.865:518): pid=1973 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:29:01.865000 audit[1973]: USER_END pid=1973 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:29:01.865000 audit[1973]: CRED_DISP pid=1973 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:29:01.885205 kernel: audit: type=1104 audit(1765564141.865:519): pid=1973 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 18:29:02.023394 sshd[1960]: Connection closed by 139.178.89.65 port 58174 Dec 12 18:29:02.025407 sshd-session[1954]: pam_unix(sshd:session): session closed for user core Dec 12 18:29:02.029000 audit[1954]: USER_END pid=1954 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:29:02.038193 kernel: audit: type=1106 audit(1765564142.029:520): pid=1954 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:29:02.041925 systemd[1]: sshd@8-10.230.23.66:22-139.178.89.65:58174.service: Deactivated successfully. Dec 12 18:29:02.048541 kernel: audit: type=1104 audit(1765564142.029:521): pid=1954 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:29:02.029000 audit[1954]: CRED_DISP pid=1954 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:29:02.049548 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:29:02.050239 systemd[1]: session-11.scope: Consumed 6.804s CPU time, 155.9M memory peak. Dec 12 18:29:02.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.23.66:22-139.178.89.65:58174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:29:02.056179 kernel: audit: type=1131 audit(1765564142.041:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.23.66:22-139.178.89.65:58174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:29:02.057395 systemd-logind[1633]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:29:02.061840 systemd-logind[1633]: Removed session 11. 
Dec 12 18:29:03.101000 audit[3357]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:03.111705 kernel: audit: type=1325 audit(1765564143.101:523): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:03.101000 audit[3357]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcdd90fe40 a2=0 a3=7ffcdd90fe2c items=0 ppid=3085 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:03.122186 kernel: audit: type=1300 audit(1765564143.101:523): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcdd90fe40 a2=0 a3=7ffcdd90fe2c items=0 ppid=3085 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:03.101000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:03.129198 kernel: audit: type=1327 audit(1765564143.101:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:03.130000 audit[3357]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:03.135272 kernel: audit: type=1325 audit(1765564143.130:524): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:03.130000 audit[3357]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcdd90fe40 a2=0 a3=0 items=0 ppid=3085 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:03.143216 kernel: audit: type=1300 audit(1765564143.130:524): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcdd90fe40 a2=0 a3=0 items=0 ppid=3085 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:03.130000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:03.158000 audit[3359]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:03.158000 audit[3359]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd013ba250 a2=0 a3=7ffd013ba23c items=0 ppid=3085 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:03.158000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:03.162000 audit[3359]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3359 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:03.162000 audit[3359]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd013ba250 a2=0 a3=0 items=0 ppid=3085 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:03.162000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:06.020000 audit[3362]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:06.020000 audit[3362]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcea43aa40 a2=0 a3=7ffcea43aa2c items=0 ppid=3085 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:06.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:06.025000 audit[3362]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:06.025000 audit[3362]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcea43aa40 a2=0 a3=0 items=0 ppid=3085 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:06.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:06.049000 audit[3364]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:06.049000 audit[3364]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc9a8a39a0 a2=0 a3=7ffc9a8a398c items=0 ppid=3085 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:06.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:06.054000 audit[3364]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:06.054000 audit[3364]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc9a8a39a0 a2=0 a3=0 items=0 ppid=3085 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:06.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:07.099216 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 12 18:29:07.099390 kernel: audit: type=1325 audit(1765564147.092:531): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 12 18:29:07.092000 audit[3366]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:07.092000 audit[3366]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdbfc979f0 a2=0 a3=7ffdbfc979dc items=0 ppid=3085 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:07.092000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:07.118728 kernel: audit: type=1300 audit(1765564147.092:531): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdbfc979f0 a2=0 a3=7ffdbfc979dc items=0 ppid=3085 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:07.118860 kernel: audit: type=1327 audit(1765564147.092:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:07.105000 audit[3366]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:07.127291 kernel: audit: type=1325 audit(1765564147.105:532): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:07.105000 audit[3366]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbfc979f0 a2=0 a3=0 items=0 ppid=3085 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:07.105000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:07.137949 kernel: audit: type=1300 audit(1765564147.105:532): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbfc979f0 a2=0 a3=0 items=0 ppid=3085 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:07.138134 kernel: audit: type=1327 audit(1765564147.105:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:08.197846 systemd[1]: Created slice kubepods-besteffort-pod69a04df4_69f9_4348_8e5f_051c55bcb954.slice - libcontainer container kubepods-besteffort-pod69a04df4_69f9_4348_8e5f_051c55bcb954.slice. 
Dec 12 18:29:08.231962 kubelet[2966]: I1212 18:29:08.231747 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a04df4-69f9-4348-8e5f-051c55bcb954-tigera-ca-bundle\") pod \"calico-typha-758d84bbf7-96c4g\" (UID: \"69a04df4-69f9-4348-8e5f-051c55bcb954\") " pod="calico-system/calico-typha-758d84bbf7-96c4g" Dec 12 18:29:08.231962 kubelet[2966]: I1212 18:29:08.231821 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/69a04df4-69f9-4348-8e5f-051c55bcb954-typha-certs\") pod \"calico-typha-758d84bbf7-96c4g\" (UID: \"69a04df4-69f9-4348-8e5f-051c55bcb954\") " pod="calico-system/calico-typha-758d84bbf7-96c4g" Dec 12 18:29:08.231962 kubelet[2966]: I1212 18:29:08.231851 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf878\" (UniqueName: \"kubernetes.io/projected/69a04df4-69f9-4348-8e5f-051c55bcb954-kube-api-access-cf878\") pod \"calico-typha-758d84bbf7-96c4g\" (UID: \"69a04df4-69f9-4348-8e5f-051c55bcb954\") " pod="calico-system/calico-typha-758d84bbf7-96c4g" Dec 12 18:29:08.285000 audit[3368]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:08.293340 kernel: audit: type=1325 audit(1765564148.285:533): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:08.285000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff950a13f0 a2=0 a3=7fff950a13dc items=0 ppid=3085 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:08.301267 kernel: audit: type=1300 audit(1765564148.285:533): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff950a13f0 a2=0 a3=7fff950a13dc items=0 ppid=3085 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:08.285000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:08.306221 kernel: audit: type=1327 audit(1765564148.285:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:08.293000 audit[3368]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:08.310337 kernel: audit: type=1325 audit(1765564148.293:534): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:08.293000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff950a13f0 a2=0 a3=0 items=0 ppid=3085 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:08.293000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:08.417587 systemd[1]: Created slice kubepods-besteffort-pod9806054c_9ee6_4d2d_9bc9_ed9417b7a616.slice - libcontainer container kubepods-besteffort-pod9806054c_9ee6_4d2d_9bc9_ed9417b7a616.slice. Dec 12 18:29:08.517181 containerd[1664]: time="2025-12-12T18:29:08.517052983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-758d84bbf7-96c4g,Uid:69a04df4-69f9-4348-8e5f-051c55bcb954,Namespace:calico-system,Attempt:0,}" Dec 12 18:29:08.534215 kubelet[2966]: I1212 18:29:08.533585 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9806054c-9ee6-4d2d-9bc9-ed9417b7a616-cni-bin-dir\") pod \"calico-node-7htd9\" (UID: \"9806054c-9ee6-4d2d-9bc9-ed9417b7a616\") " pod="calico-system/calico-node-7htd9" Dec 12 18:29:08.534215 kubelet[2966]: I1212 18:29:08.533648 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9806054c-9ee6-4d2d-9bc9-ed9417b7a616-cni-log-dir\") pod \"calico-node-7htd9\" (UID: \"9806054c-9ee6-4d2d-9bc9-ed9417b7a616\") " pod="calico-system/calico-node-7htd9" Dec 12 18:29:08.534215 kubelet[2966]: I1212 18:29:08.533680 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9806054c-9ee6-4d2d-9bc9-ed9417b7a616-var-run-calico\") pod \"calico-node-7htd9\" (UID: \"9806054c-9ee6-4d2d-9bc9-ed9417b7a616\") " pod="calico-system/calico-node-7htd9" Dec 12 18:29:08.534215 kubelet[2966]: I1212 18:29:08.533794 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gqqc\" (UniqueName: \"kubernetes.io/projected/9806054c-9ee6-4d2d-9bc9-ed9417b7a616-kube-api-access-7gqqc\") pod \"calico-node-7htd9\" (UID: \"9806054c-9ee6-4d2d-9bc9-ed9417b7a616\") " pod="calico-system/calico-node-7htd9" Dec 12 18:29:08.534215 kubelet[2966]: I1212 18:29:08.533857 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9806054c-9ee6-4d2d-9bc9-ed9417b7a616-policysync\") pod \"calico-node-7htd9\" (UID: \"9806054c-9ee6-4d2d-9bc9-ed9417b7a616\") " pod="calico-system/calico-node-7htd9" Dec 12 18:29:08.534697 kubelet[2966]: I1212 18:29:08.533919 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9806054c-9ee6-4d2d-9bc9-ed9417b7a616-tigera-ca-bundle\") pod \"calico-node-7htd9\" (UID: \"9806054c-9ee6-4d2d-9bc9-ed9417b7a616\") " pod="calico-system/calico-node-7htd9" Dec 12 18:29:08.534697 kubelet[2966]: I1212 18:29:08.533948 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9806054c-9ee6-4d2d-9bc9-ed9417b7a616-cni-net-dir\") pod \"calico-node-7htd9\" (UID: \"9806054c-9ee6-4d2d-9bc9-ed9417b7a616\") " pod="calico-system/calico-node-7htd9" Dec 12 18:29:08.534697 kubelet[2966]: I1212 18:29:08.533974 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9806054c-9ee6-4d2d-9bc9-ed9417b7a616-lib-modules\") pod 
\"calico-node-7htd9\" (UID: \"9806054c-9ee6-4d2d-9bc9-ed9417b7a616\") " pod="calico-system/calico-node-7htd9" Dec 12 18:29:08.534697 kubelet[2966]: I1212 18:29:08.534043 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9806054c-9ee6-4d2d-9bc9-ed9417b7a616-node-certs\") pod \"calico-node-7htd9\" (UID: \"9806054c-9ee6-4d2d-9bc9-ed9417b7a616\") " pod="calico-system/calico-node-7htd9" Dec 12 18:29:08.534697 kubelet[2966]: I1212 18:29:08.534076 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9806054c-9ee6-4d2d-9bc9-ed9417b7a616-var-lib-calico\") pod \"calico-node-7htd9\" (UID: \"9806054c-9ee6-4d2d-9bc9-ed9417b7a616\") " pod="calico-system/calico-node-7htd9" Dec 12 18:29:08.538013 kubelet[2966]: I1212 18:29:08.534101 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9806054c-9ee6-4d2d-9bc9-ed9417b7a616-xtables-lock\") pod \"calico-node-7htd9\" (UID: \"9806054c-9ee6-4d2d-9bc9-ed9417b7a616\") " pod="calico-system/calico-node-7htd9" Dec 12 18:29:08.538261 kubelet[2966]: I1212 18:29:08.538234 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9806054c-9ee6-4d2d-9bc9-ed9417b7a616-flexvol-driver-host\") pod \"calico-node-7htd9\" (UID: \"9806054c-9ee6-4d2d-9bc9-ed9417b7a616\") " pod="calico-system/calico-node-7htd9" Dec 12 18:29:08.542296 kubelet[2966]: E1212 18:29:08.542198 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:29:08.613408 containerd[1664]: time="2025-12-12T18:29:08.613345110Z" level=info msg="connecting to shim b7a505169b0f14f7d7f17e61e711e226165868ef788da5c0791be9062fc5d079" address="unix:///run/containerd/s/8e273affb75e86d480da0a5c29e6eaa53187ff69cfbd2e8ca78c1fb9b5c9b19d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:29:08.643369 kubelet[2966]: I1212 18:29:08.643310 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/30dcceea-b67a-4ecb-b6c6-16baeb5ae67c-socket-dir\") pod \"csi-node-driver-mzv84\" (UID: \"30dcceea-b67a-4ecb-b6c6-16baeb5ae67c\") " pod="calico-system/csi-node-driver-mzv84" Dec 12 18:29:08.643521 kubelet[2966]: I1212 18:29:08.643431 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/30dcceea-b67a-4ecb-b6c6-16baeb5ae67c-varrun\") pod \"csi-node-driver-mzv84\" (UID: \"30dcceea-b67a-4ecb-b6c6-16baeb5ae67c\") " pod="calico-system/csi-node-driver-mzv84" Dec 12 18:29:08.643521 kubelet[2966]: I1212 18:29:08.643469 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hcc\" (UniqueName: \"kubernetes.io/projected/30dcceea-b67a-4ecb-b6c6-16baeb5ae67c-kube-api-access-j9hcc\") pod \"csi-node-driver-mzv84\" (UID: \"30dcceea-b67a-4ecb-b6c6-16baeb5ae67c\") " pod="calico-system/csi-node-driver-mzv84" Dec 12 18:29:08.643628 
kubelet[2966]: I1212 18:29:08.643591 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30dcceea-b67a-4ecb-b6c6-16baeb5ae67c-kubelet-dir\") pod \"csi-node-driver-mzv84\" (UID: \"30dcceea-b67a-4ecb-b6c6-16baeb5ae67c\") " pod="calico-system/csi-node-driver-mzv84" Dec 12 18:29:08.643702 kubelet[2966]: I1212 18:29:08.643670 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30dcceea-b67a-4ecb-b6c6-16baeb5ae67c-registration-dir\") pod \"csi-node-driver-mzv84\" (UID: \"30dcceea-b67a-4ecb-b6c6-16baeb5ae67c\") " pod="calico-system/csi-node-driver-mzv84" Dec 12 18:29:08.670186 kubelet[2966]: E1212 18:29:08.670080 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.670186 kubelet[2966]: W1212 18:29:08.670135 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.671998 kubelet[2966]: E1212 18:29:08.671874 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.694432 systemd[1]: Started cri-containerd-b7a505169b0f14f7d7f17e61e711e226165868ef788da5c0791be9062fc5d079.scope - libcontainer container b7a505169b0f14f7d7f17e61e711e226165868ef788da5c0791be9062fc5d079. Dec 12 18:29:08.725430 kubelet[2966]: E1212 18:29:08.725383 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.725688 kubelet[2966]: W1212 18:29:08.725643 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.726233 kubelet[2966]: E1212 18:29:08.726204 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.730133 containerd[1664]: time="2025-12-12T18:29:08.730065925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7htd9,Uid:9806054c-9ee6-4d2d-9bc9-ed9417b7a616,Namespace:calico-system,Attempt:0,}" Dec 12 18:29:08.745052 kubelet[2966]: E1212 18:29:08.745002 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.745052 kubelet[2966]: W1212 18:29:08.745039 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.746137 kubelet[2966]: E1212 18:29:08.745070 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:08.746137 kubelet[2966]: E1212 18:29:08.745806 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.746137 kubelet[2966]: W1212 18:29:08.745833 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.746137 kubelet[2966]: E1212 18:29:08.745850 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.748230 kubelet[2966]: E1212 18:29:08.748199 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.748230 kubelet[2966]: W1212 18:29:08.748225 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.748519 kubelet[2966]: E1212 18:29:08.748253 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.748684 kubelet[2966]: E1212 18:29:08.748661 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.748684 kubelet[2966]: W1212 18:29:08.748681 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.748892 kubelet[2966]: E1212 18:29:08.748767 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.749345 kubelet[2966]: E1212 18:29:08.749274 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.749667 kubelet[2966]: W1212 18:29:08.749346 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.749793 kubelet[2966]: E1212 18:29:08.749769 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.752426 kubelet[2966]: E1212 18:29:08.752392 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.752426 kubelet[2966]: W1212 18:29:08.752422 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.752654 kubelet[2966]: E1212 18:29:08.752462 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:08.753063 kubelet[2966]: E1212 18:29:08.753028 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.753063 kubelet[2966]: W1212 18:29:08.753049 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.753748 kubelet[2966]: E1212 18:29:08.753152 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.754670 kubelet[2966]: E1212 18:29:08.754639 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.754765 kubelet[2966]: W1212 18:29:08.754719 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.755141 kubelet[2966]: E1212 18:29:08.754969 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.757240 kubelet[2966]: E1212 18:29:08.757210 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.757240 kubelet[2966]: W1212 18:29:08.757236 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.757606 kubelet[2966]: E1212 18:29:08.757505 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.757606 kubelet[2966]: W1212 18:29:08.757557 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.757916 kubelet[2966]: E1212 18:29:08.757850 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.757916 kubelet[2966]: W1212 18:29:08.757889 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.758594 kubelet[2966]: E1212 18:29:08.758536 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.758594 kubelet[2966]: W1212 18:29:08.758557 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.761206 kubelet[2966]: E1212 18:29:08.760017 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:08.761206 kubelet[2966]: E1212 18:29:08.760060 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.761206 kubelet[2966]: E1212 18:29:08.760077 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.761206 kubelet[2966]: E1212 18:29:08.760090 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.761206 kubelet[2966]: E1212 18:29:08.760155 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.761206 kubelet[2966]: W1212 18:29:08.760652 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.761206 kubelet[2966]: E1212 18:29:08.760671 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.761918 kubelet[2966]: E1212 18:29:08.761889 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.761918 kubelet[2966]: W1212 18:29:08.761913 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.762267 kubelet[2966]: E1212 18:29:08.761934 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.762944 kubelet[2966]: E1212 18:29:08.762842 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.762944 kubelet[2966]: W1212 18:29:08.762863 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.763454 kubelet[2966]: E1212 18:29:08.763077 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:08.763760 kubelet[2966]: E1212 18:29:08.763603 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.763760 kubelet[2966]: W1212 18:29:08.763658 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.764453 kubelet[2966]: E1212 18:29:08.763937 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.764453 kubelet[2966]: W1212 18:29:08.763971 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.765288 kubelet[2966]: E1212 18:29:08.764843 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.765288 kubelet[2966]: W1212 18:29:08.764858 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.766549 kubelet[2966]: E1212 18:29:08.765434 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.766549 kubelet[2966]: W1212 18:29:08.765455 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.766549 kubelet[2966]: E1212 18:29:08.765471 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.766549 kubelet[2966]: E1212 18:29:08.765884 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.766549 kubelet[2966]: W1212 18:29:08.765898 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.766549 kubelet[2966]: E1212 18:29:08.765936 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.766549 kubelet[2966]: E1212 18:29:08.765974 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.766549 kubelet[2966]: E1212 18:29:08.766010 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.766549 kubelet[2966]: E1212 18:29:08.766038 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:08.769533 kubelet[2966]: E1212 18:29:08.769196 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.769533 kubelet[2966]: W1212 18:29:08.769224 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.769533 kubelet[2966]: E1212 18:29:08.769245 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.772421 kubelet[2966]: E1212 18:29:08.770450 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.772421 kubelet[2966]: W1212 18:29:08.770472 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.772421 kubelet[2966]: E1212 18:29:08.772213 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.773002 kubelet[2966]: E1212 18:29:08.772977 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.773267 kubelet[2966]: W1212 18:29:08.773126 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.773552 kubelet[2966]: E1212 18:29:08.773392 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.774703 kubelet[2966]: E1212 18:29:08.774671 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.774703 kubelet[2966]: W1212 18:29:08.774695 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.775134 kubelet[2966]: E1212 18:29:08.774725 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.777076 kubelet[2966]: E1212 18:29:08.775524 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.777294 kubelet[2966]: W1212 18:29:08.777219 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.777294 kubelet[2966]: E1212 18:29:08.777249 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:08.805000 audit: BPF prog-id=155 op=LOAD Dec 12 18:29:08.810000 audit: BPF prog-id=156 op=LOAD Dec 12 18:29:08.810000 audit[3390]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3380 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:08.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237613530353136396230663134663764376631376536316537313165 Dec 12 18:29:08.811000 audit: BPF prog-id=156 op=UNLOAD Dec 12 18:29:08.811000 audit[3390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:08.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237613530353136396230663134663764376631376536316537313165 Dec 12 18:29:08.815000 audit: BPF prog-id=157 op=LOAD Dec 12 18:29:08.815000 audit[3390]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3380 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:08.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237613530353136396230663134663764376631376536316537313165 Dec 12 18:29:08.818000 audit: BPF prog-id=158 op=LOAD Dec 12 18:29:08.818000 audit[3390]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3380 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:08.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237613530353136396230663134663764376631376536316537313165 Dec 12 18:29:08.818000 audit: BPF prog-id=158 op=UNLOAD Dec 12 18:29:08.818000 audit[3390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:08.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237613530353136396230663134663764376631376536316537313165 Dec 12 18:29:08.818000 audit: BPF prog-id=157 op=UNLOAD Dec 12 
18:29:08.818000 audit[3390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:08.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237613530353136396230663134663764376631376536316537313165 Dec 12 18:29:08.818000 audit: BPF prog-id=159 op=LOAD Dec 12 18:29:08.818000 audit[3390]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3380 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:08.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237613530353136396230663134663764376631376536316537313165 Dec 12 18:29:08.834634 kubelet[2966]: E1212 18:29:08.834444 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:08.834634 kubelet[2966]: W1212 18:29:08.834475 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:08.834634 kubelet[2966]: E1212 18:29:08.834525 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:08.847809 containerd[1664]: time="2025-12-12T18:29:08.847408852Z" level=info msg="connecting to shim a7876bc8c0267a0dacdbd744bbe5b67b60683f85e8b46ef6e015c48d160fa97a" address="unix:///run/containerd/s/aa7faf6a9d034be055d5993e594b780b399212432b3adbb9cb3d5662289e0e63" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:29:08.904432 systemd[1]: Started cri-containerd-a7876bc8c0267a0dacdbd744bbe5b67b60683f85e8b46ef6e015c48d160fa97a.scope - libcontainer container a7876bc8c0267a0dacdbd744bbe5b67b60683f85e8b46ef6e015c48d160fa97a. 
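The repeated driver-call.go / plugins.go errors above are the kubelet probing the nodeagent~uds FlexVolume shim before anything has installed it; the flexvol-driver-host host-path volume requested by calico-node earlier in this log is what normally receives that binary, so the noise is expected to stop once calico-node is running. A small, hedged check against the exact path quoted in the errors (Python, illustrative only):

    import json, os, subprocess

    # Path copied from the driver-call.go messages above.
    driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
    if os.access(driver, os.X_OK):
        # A functioning FlexVolume driver answers "init" with a JSON status object.
        out = subprocess.run([driver, "init"], capture_output=True, text=True)
        print(json.loads(out.stdout or "{}"))
    else:
        # Matches the "executable file not found in $PATH" lines in this section.
        print("flexvolume shim not installed yet:", driver)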
Dec 12 18:29:08.998000 audit: BPF prog-id=160 op=LOAD Dec 12 18:29:09.000000 audit: BPF prog-id=161 op=LOAD Dec 12 18:29:09.000000 audit[3461]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3449 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:09.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383736626338633032363761306461636462643734346262653562 Dec 12 18:29:09.001000 audit: BPF prog-id=161 op=UNLOAD Dec 12 18:29:09.001000 audit[3461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:09.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383736626338633032363761306461636462643734346262653562 Dec 12 18:29:09.003000 audit: BPF prog-id=162 op=LOAD Dec 12 18:29:09.003000 audit[3461]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3449 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:09.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383736626338633032363761306461636462643734346262653562 Dec 12 18:29:09.003000 audit: BPF prog-id=163 op=LOAD Dec 12 18:29:09.003000 audit[3461]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3449 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:09.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383736626338633032363761306461636462643734346262653562 Dec 12 18:29:09.003000 audit: BPF prog-id=163 op=UNLOAD Dec 12 18:29:09.003000 audit[3461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:09.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383736626338633032363761306461636462643734346262653562 Dec 12 18:29:09.003000 audit: BPF prog-id=162 op=UNLOAD Dec 12 18:29:09.003000 audit[3461]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:09.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383736626338633032363761306461636462643734346262653562 Dec 12 18:29:09.003000 audit: BPF prog-id=164 op=LOAD Dec 12 18:29:09.003000 audit[3461]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3449 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:09.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383736626338633032363761306461636462643734346262653562 Dec 12 18:29:09.016737 containerd[1664]: time="2025-12-12T18:29:09.016690998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-758d84bbf7-96c4g,Uid:69a04df4-69f9-4348-8e5f-051c55bcb954,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7a505169b0f14f7d7f17e61e711e226165868ef788da5c0791be9062fc5d079\"" Dec 12 18:29:09.023623 containerd[1664]: time="2025-12-12T18:29:09.023512295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 18:29:09.065242 containerd[1664]: time="2025-12-12T18:29:09.065060190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7htd9,Uid:9806054c-9ee6-4d2d-9bc9-ed9417b7a616,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7876bc8c0267a0dacdbd744bbe5b67b60683f85e8b46ef6e015c48d160fa97a\"" Dec 12 18:29:09.327000 audit[3496]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3496 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:09.327000 audit[3496]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe790ea300 a2=0 a3=7ffe790ea2ec items=0 ppid=3085 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:09.327000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:09.334000 audit[3496]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3496 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:09.334000 audit[3496]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe790ea300 a2=0 a3=0 items=0 ppid=3085 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:09.334000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:10.510435 kubelet[2966]: E1212 18:29:10.510293 2966 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:29:10.616890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898241478.mount: Deactivated successfully. Dec 12 18:29:12.496784 kubelet[2966]: E1212 18:29:12.495672 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:29:13.098184 containerd[1664]: time="2025-12-12T18:29:13.098070029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:29:13.099920 containerd[1664]: time="2025-12-12T18:29:13.099869966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 12 18:29:13.101180 containerd[1664]: time="2025-12-12T18:29:13.101110147Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:29:13.108710 containerd[1664]: time="2025-12-12T18:29:13.107493605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:29:13.108710 containerd[1664]: time="2025-12-12T18:29:13.108527831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.084924334s" Dec 12 18:29:13.108710 containerd[1664]: time="2025-12-12T18:29:13.108571089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 12 18:29:13.110459 containerd[1664]: time="2025-12-12T18:29:13.110418693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 18:29:13.135310 containerd[1664]: time="2025-12-12T18:29:13.135245526Z" level=info msg="CreateContainer within sandbox \"b7a505169b0f14f7d7f17e61e711e226165868ef788da5c0791be9062fc5d079\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 18:29:13.158651 containerd[1664]: time="2025-12-12T18:29:13.155087046Z" level=info msg="Container 13273a469a1ed73ab4f7afffba86d5d5781f1bd3908e02051918d34c065d2552: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:29:13.162722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167487774.mount: Deactivated successfully. 
Dec 12 18:29:13.172029 containerd[1664]: time="2025-12-12T18:29:13.171896618Z" level=info msg="CreateContainer within sandbox \"b7a505169b0f14f7d7f17e61e711e226165868ef788da5c0791be9062fc5d079\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"13273a469a1ed73ab4f7afffba86d5d5781f1bd3908e02051918d34c065d2552\"" Dec 12 18:29:13.173192 containerd[1664]: time="2025-12-12T18:29:13.173135409Z" level=info msg="StartContainer for \"13273a469a1ed73ab4f7afffba86d5d5781f1bd3908e02051918d34c065d2552\"" Dec 12 18:29:13.175825 containerd[1664]: time="2025-12-12T18:29:13.175783595Z" level=info msg="connecting to shim 13273a469a1ed73ab4f7afffba86d5d5781f1bd3908e02051918d34c065d2552" address="unix:///run/containerd/s/8e273affb75e86d480da0a5c29e6eaa53187ff69cfbd2e8ca78c1fb9b5c9b19d" protocol=ttrpc version=3 Dec 12 18:29:13.242576 systemd[1]: Started cri-containerd-13273a469a1ed73ab4f7afffba86d5d5781f1bd3908e02051918d34c065d2552.scope - libcontainer container 13273a469a1ed73ab4f7afffba86d5d5781f1bd3908e02051918d34c065d2552. Dec 12 18:29:13.277000 audit: BPF prog-id=165 op=LOAD Dec 12 18:29:13.285452 kernel: kauditd_printk_skb: 52 callbacks suppressed Dec 12 18:29:13.285590 kernel: audit: type=1334 audit(1765564153.277:553): prog-id=165 op=LOAD Dec 12 18:29:13.284000 audit: BPF prog-id=166 op=LOAD Dec 12 18:29:13.284000 audit[3507]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3380 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:13.290057 kernel: audit: type=1334 audit(1765564153.284:554): prog-id=166 op=LOAD Dec 12 18:29:13.290231 kernel: audit: type=1300 audit(1765564153.284:554): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3380 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:13.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323733613436396131656437336162346637616666666261383664 Dec 12 18:29:13.300212 kernel: audit: type=1327 audit(1765564153.284:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323733613436396131656437336162346637616666666261383664 Dec 12 18:29:13.284000 audit: BPF prog-id=166 op=UNLOAD Dec 12 18:29:13.284000 audit[3507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:13.305441 kernel: audit: type=1334 audit(1765564153.284:555): prog-id=166 op=UNLOAD Dec 12 18:29:13.305657 kernel: audit: type=1300 audit(1765564153.284:555): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
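The kernel-emitted copies of these audit records carry only numeric types, while the userspace lines name them; in this section type 1300 pairs with SYSCALL, 1325 with NETFILTER_CFG, 1327 with PROCTITLE, and 1334 with the BPF prog load/unload events. A tiny lookup for tagging the raw kernel lines, limited to the types that actually occur here (Python, names illustrative):

    import re

    # Record types observed in this log and the names used by the userspace lines.
    AUDIT_TYPES = {1300: "SYSCALL", 1325: "NETFILTER_CFG", 1327: "PROCTITLE", 1334: "BPF"}

    def tag(line: str) -> str:
        m = re.search(r"audit: type=(\d+)", line)
        return AUDIT_TYPES.get(int(m.group(1)), "UNKNOWN") if m else "NO-TYPE"

    # Sample string taken verbatim from this section.
    print(tag("kernel: audit: type=1334 audit(1765564153.277:553): prog-id=165 op=LOAD"))  # BPF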
Dec 12 18:29:13.318466 kernel: audit: type=1327 audit(1765564153.284:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323733613436396131656437336162346637616666666261383664 Dec 12 18:29:13.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323733613436396131656437336162346637616666666261383664 Dec 12 18:29:13.284000 audit: BPF prog-id=167 op=LOAD Dec 12 18:29:13.326636 kernel: audit: type=1334 audit(1765564153.284:556): prog-id=167 op=LOAD Dec 12 18:29:13.326735 kernel: audit: type=1300 audit(1765564153.284:556): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3380 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:13.284000 audit[3507]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3380 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:13.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323733613436396131656437336162346637616666666261383664 Dec 12 18:29:13.284000 audit: BPF prog-id=168 op=LOAD Dec 12 18:29:13.284000 audit[3507]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3380 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:13.333352 kernel: audit: type=1327 audit(1765564153.284:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323733613436396131656437336162346637616666666261383664 Dec 12 18:29:13.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323733613436396131656437336162346637616666666261383664 Dec 12 18:29:13.284000 audit: BPF prog-id=168 op=UNLOAD Dec 12 18:29:13.284000 audit[3507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:13.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323733613436396131656437336162346637616666666261383664 Dec 12 18:29:13.285000 audit: BPF prog-id=167 op=UNLOAD Dec 12 
18:29:13.285000 audit[3507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:13.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323733613436396131656437336162346637616666666261383664 Dec 12 18:29:13.285000 audit: BPF prog-id=169 op=LOAD Dec 12 18:29:13.285000 audit[3507]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3380 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:13.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323733613436396131656437336162346637616666666261383664 Dec 12 18:29:13.423682 containerd[1664]: time="2025-12-12T18:29:13.423515962Z" level=info msg="StartContainer for \"13273a469a1ed73ab4f7afffba86d5d5781f1bd3908e02051918d34c065d2552\" returns successfully" Dec 12 18:29:13.739242 kubelet[2966]: E1212 18:29:13.738426 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.739242 kubelet[2966]: W1212 18:29:13.738489 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.748237 kubelet[2966]: E1212 18:29:13.748034 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.749653 kubelet[2966]: E1212 18:29:13.749361 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.749653 kubelet[2966]: W1212 18:29:13.749448 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.749653 kubelet[2966]: E1212 18:29:13.749490 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.752195 kubelet[2966]: E1212 18:29:13.750557 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.752195 kubelet[2966]: W1212 18:29:13.750583 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.752195 kubelet[2966]: E1212 18:29:13.750609 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:13.752195 kubelet[2966]: E1212 18:29:13.750957 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.752195 kubelet[2966]: W1212 18:29:13.750973 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.752195 kubelet[2966]: E1212 18:29:13.750996 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.753912 kubelet[2966]: E1212 18:29:13.753876 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.754010 kubelet[2966]: W1212 18:29:13.753909 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.754010 kubelet[2966]: E1212 18:29:13.753941 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.754249 kubelet[2966]: E1212 18:29:13.754220 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.754249 kubelet[2966]: W1212 18:29:13.754242 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.754385 kubelet[2966]: E1212 18:29:13.754267 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.755194 kubelet[2966]: E1212 18:29:13.754484 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.755194 kubelet[2966]: W1212 18:29:13.754505 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.755194 kubelet[2966]: E1212 18:29:13.754528 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.755194 kubelet[2966]: E1212 18:29:13.754763 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.755194 kubelet[2966]: W1212 18:29:13.754783 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.755194 kubelet[2966]: E1212 18:29:13.754798 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:13.755194 kubelet[2966]: E1212 18:29:13.755056 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.755194 kubelet[2966]: W1212 18:29:13.755070 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.755194 kubelet[2966]: E1212 18:29:13.755085 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.755783 kubelet[2966]: E1212 18:29:13.755325 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.755783 kubelet[2966]: W1212 18:29:13.755339 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.755783 kubelet[2966]: E1212 18:29:13.755359 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.755783 kubelet[2966]: E1212 18:29:13.755560 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.755783 kubelet[2966]: W1212 18:29:13.755573 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.755783 kubelet[2966]: E1212 18:29:13.755592 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.756031 kubelet[2966]: E1212 18:29:13.755833 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.756031 kubelet[2966]: W1212 18:29:13.755847 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.756031 kubelet[2966]: E1212 18:29:13.755893 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.760189 kubelet[2966]: E1212 18:29:13.758387 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.760189 kubelet[2966]: W1212 18:29:13.758425 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.760189 kubelet[2966]: E1212 18:29:13.758450 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:13.760189 kubelet[2966]: E1212 18:29:13.758727 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.760189 kubelet[2966]: W1212 18:29:13.758742 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.760189 kubelet[2966]: E1212 18:29:13.758757 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.760189 kubelet[2966]: E1212 18:29:13.759013 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.760189 kubelet[2966]: W1212 18:29:13.759029 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.760189 kubelet[2966]: E1212 18:29:13.759045 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.798676 kubelet[2966]: E1212 18:29:13.798624 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.798970 kubelet[2966]: W1212 18:29:13.798940 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.799114 kubelet[2966]: E1212 18:29:13.799086 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.799531 kubelet[2966]: E1212 18:29:13.799510 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.799679 kubelet[2966]: W1212 18:29:13.799655 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.799807 kubelet[2966]: E1212 18:29:13.799785 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.800618 kubelet[2966]: E1212 18:29:13.800447 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.800618 kubelet[2966]: W1212 18:29:13.800475 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.800618 kubelet[2966]: E1212 18:29:13.800497 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:13.801465 kubelet[2966]: E1212 18:29:13.801436 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.801602 kubelet[2966]: W1212 18:29:13.801578 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.802188 kubelet[2966]: E1212 18:29:13.801935 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.802188 kubelet[2966]: W1212 18:29:13.801954 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.802984 kubelet[2966]: E1212 18:29:13.802944 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.803055 kubelet[2966]: E1212 18:29:13.802985 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.803334 kubelet[2966]: E1212 18:29:13.803312 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.803452 kubelet[2966]: W1212 18:29:13.803429 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.803580 kubelet[2966]: E1212 18:29:13.803558 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.803967 kubelet[2966]: E1212 18:29:13.803948 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.804142 kubelet[2966]: W1212 18:29:13.804065 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.804142 kubelet[2966]: E1212 18:29:13.804100 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.804428 kubelet[2966]: E1212 18:29:13.804393 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.804428 kubelet[2966]: W1212 18:29:13.804417 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.804764 kubelet[2966]: E1212 18:29:13.804455 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:13.804764 kubelet[2966]: E1212 18:29:13.804707 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.804764 kubelet[2966]: W1212 18:29:13.804722 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.804764 kubelet[2966]: E1212 18:29:13.804746 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.805266 kubelet[2966]: E1212 18:29:13.804967 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.805266 kubelet[2966]: W1212 18:29:13.804987 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.805266 kubelet[2966]: E1212 18:29:13.805003 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.805858 kubelet[2966]: E1212 18:29:13.805828 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.806224 kubelet[2966]: W1212 18:29:13.805848 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.806791 kubelet[2966]: E1212 18:29:13.806305 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.806791 kubelet[2966]: E1212 18:29:13.806676 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.806791 kubelet[2966]: W1212 18:29:13.806694 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.807488 kubelet[2966]: E1212 18:29:13.807463 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.807488 kubelet[2966]: W1212 18:29:13.807485 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.809195 kubelet[2966]: E1212 18:29:13.808234 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.809195 kubelet[2966]: E1212 18:29:13.808264 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:13.809434 kubelet[2966]: E1212 18:29:13.809411 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.809434 kubelet[2966]: W1212 18:29:13.809432 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.810332 kubelet[2966]: E1212 18:29:13.809457 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.810414 kubelet[2966]: E1212 18:29:13.810351 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.810414 kubelet[2966]: W1212 18:29:13.810366 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.810414 kubelet[2966]: E1212 18:29:13.810383 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.810695 kubelet[2966]: E1212 18:29:13.810674 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.810695 kubelet[2966]: W1212 18:29:13.810693 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.810901 kubelet[2966]: E1212 18:29:13.810771 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.811175 kubelet[2966]: E1212 18:29:13.811135 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.811265 kubelet[2966]: W1212 18:29:13.811221 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.811265 kubelet[2966]: E1212 18:29:13.811243 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:13.812275 kubelet[2966]: E1212 18:29:13.811979 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:13.812275 kubelet[2966]: W1212 18:29:13.812206 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:13.812275 kubelet[2966]: E1212 18:29:13.812236 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:14.497840 kubelet[2966]: E1212 18:29:14.497275 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:29:14.680567 kubelet[2966]: I1212 18:29:14.680508 2966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:29:14.710000 containerd[1664]: time="2025-12-12T18:29:14.709733468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:29:14.711840 containerd[1664]: time="2025-12-12T18:29:14.711617236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:14.713077 containerd[1664]: time="2025-12-12T18:29:14.713030966Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:29:14.715896 containerd[1664]: time="2025-12-12T18:29:14.715842815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:29:14.717091 containerd[1664]: time="2025-12-12T18:29:14.717024839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.606562528s" Dec 12 18:29:14.717197 containerd[1664]: time="2025-12-12T18:29:14.717116536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:29:14.721415 containerd[1664]: time="2025-12-12T18:29:14.721206600Z" level=info msg="CreateContainer within sandbox \"a7876bc8c0267a0dacdbd744bbe5b67b60683f85e8b46ef6e015c48d160fa97a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:29:14.739199 containerd[1664]: time="2025-12-12T18:29:14.738639547Z" level=info msg="Container c9d9f3b264cdada39a641294aaf9b3fca99ad8445cd0dd4087a8b60d605fe590: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:29:14.746476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290186644.mount: Deactivated successfully. 
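The wall of kubelet errors above is one failure repeated per probe: kubelet keeps probing the FlexVolume plugin directory nodeagent~uds, the uds executable it expects under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ is not found, so each init call produces empty output, and unmarshalling that empty string as JSON fails. A minimal Go sketch (the driverStatus struct below is illustrative, not kubelet's actual type) reproduces the exact error string logged:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is an illustrative stand-in for the JSON object a FlexVolume
// driver is expected to print on stdout; it is not kubelet's internal type.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The uds executable under nodeagent~uds is missing, so the captured
	// stdout of the "init" driver call is empty.
	output := ""

	var st driverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		// Prints: unmarshal failed: unexpected end of JSON input
		// — the same error kubelet logs for every probe above.
		fmt.Println("unmarshal failed:", err)
	}
}
```

Presumably, installing a driver binary that answers init with a JSON status object, or removing the stale nodeagent~uds directory, would quiet these probe failures; the log itself only shows the missing executable.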
Dec 12 18:29:14.753290 containerd[1664]: time="2025-12-12T18:29:14.753112665Z" level=info msg="CreateContainer within sandbox \"a7876bc8c0267a0dacdbd744bbe5b67b60683f85e8b46ef6e015c48d160fa97a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c9d9f3b264cdada39a641294aaf9b3fca99ad8445cd0dd4087a8b60d605fe590\"" Dec 12 18:29:14.760752 containerd[1664]: time="2025-12-12T18:29:14.760634914Z" level=info msg="StartContainer for \"c9d9f3b264cdada39a641294aaf9b3fca99ad8445cd0dd4087a8b60d605fe590\"" Dec 12 18:29:14.763725 containerd[1664]: time="2025-12-12T18:29:14.763648800Z" level=info msg="connecting to shim c9d9f3b264cdada39a641294aaf9b3fca99ad8445cd0dd4087a8b60d605fe590" address="unix:///run/containerd/s/aa7faf6a9d034be055d5993e594b780b399212432b3adbb9cb3d5662289e0e63" protocol=ttrpc version=3 Dec 12 18:29:14.777027 kubelet[2966]: E1212 18:29:14.776271 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.777027 kubelet[2966]: W1212 18:29:14.776318 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.777027 kubelet[2966]: E1212 18:29:14.776362 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.779066 kubelet[2966]: E1212 18:29:14.779036 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.779310 kubelet[2966]: W1212 18:29:14.779066 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.779310 kubelet[2966]: E1212 18:29:14.779123 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.779651 kubelet[2966]: E1212 18:29:14.779536 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.779651 kubelet[2966]: W1212 18:29:14.779551 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.779651 kubelet[2966]: E1212 18:29:14.779567 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.779987 kubelet[2966]: E1212 18:29:14.779858 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.779987 kubelet[2966]: W1212 18:29:14.779880 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.779987 kubelet[2966]: E1212 18:29:14.779969 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:14.780444 kubelet[2966]: E1212 18:29:14.780368 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.780528 kubelet[2966]: W1212 18:29:14.780450 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.780528 kubelet[2966]: E1212 18:29:14.780470 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.781824 kubelet[2966]: E1212 18:29:14.781798 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.781824 kubelet[2966]: W1212 18:29:14.781825 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.781972 kubelet[2966]: E1212 18:29:14.781844 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.783296 kubelet[2966]: E1212 18:29:14.783270 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.783296 kubelet[2966]: W1212 18:29:14.783293 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.783440 kubelet[2966]: E1212 18:29:14.783310 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.784582 kubelet[2966]: E1212 18:29:14.784547 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.784582 kubelet[2966]: W1212 18:29:14.784572 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.784826 kubelet[2966]: E1212 18:29:14.784590 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.785079 kubelet[2966]: E1212 18:29:14.784885 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.785079 kubelet[2966]: W1212 18:29:14.784906 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.785079 kubelet[2966]: E1212 18:29:14.784921 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:14.786068 kubelet[2966]: E1212 18:29:14.785999 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.786068 kubelet[2966]: W1212 18:29:14.786022 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.786068 kubelet[2966]: E1212 18:29:14.786041 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.786656 kubelet[2966]: E1212 18:29:14.786548 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.786656 kubelet[2966]: W1212 18:29:14.786570 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.786656 kubelet[2966]: E1212 18:29:14.786640 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.787692 kubelet[2966]: E1212 18:29:14.787663 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.787692 kubelet[2966]: W1212 18:29:14.787685 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.787835 kubelet[2966]: E1212 18:29:14.787702 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.788265 kubelet[2966]: E1212 18:29:14.787972 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.788265 kubelet[2966]: W1212 18:29:14.787992 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.788265 kubelet[2966]: E1212 18:29:14.788009 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.788894 kubelet[2966]: E1212 18:29:14.788294 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.788894 kubelet[2966]: W1212 18:29:14.788307 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.788894 kubelet[2966]: E1212 18:29:14.788322 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:14.789831 kubelet[2966]: E1212 18:29:14.789296 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.789831 kubelet[2966]: W1212 18:29:14.789311 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.789831 kubelet[2966]: E1212 18:29:14.789327 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.814337 kubelet[2966]: E1212 18:29:14.814289 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.814842 kubelet[2966]: W1212 18:29:14.814601 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.814842 kubelet[2966]: E1212 18:29:14.814667 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.815958 kubelet[2966]: E1212 18:29:14.815853 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.815958 kubelet[2966]: W1212 18:29:14.815904 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.815958 kubelet[2966]: E1212 18:29:14.815922 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.817371 kubelet[2966]: E1212 18:29:14.817315 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.817371 kubelet[2966]: W1212 18:29:14.817336 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.817598 kubelet[2966]: E1212 18:29:14.817519 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.819190 kubelet[2966]: E1212 18:29:14.818496 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.819190 kubelet[2966]: W1212 18:29:14.818515 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.819190 kubelet[2966]: E1212 18:29:14.819144 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:14.818601 systemd[1]: Started cri-containerd-c9d9f3b264cdada39a641294aaf9b3fca99ad8445cd0dd4087a8b60d605fe590.scope - libcontainer container c9d9f3b264cdada39a641294aaf9b3fca99ad8445cd0dd4087a8b60d605fe590. Dec 12 18:29:14.820303 kubelet[2966]: E1212 18:29:14.820054 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.820303 kubelet[2966]: W1212 18:29:14.820074 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.820303 kubelet[2966]: E1212 18:29:14.820127 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.821655 kubelet[2966]: E1212 18:29:14.821444 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.821655 kubelet[2966]: W1212 18:29:14.821465 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.821655 kubelet[2966]: E1212 18:29:14.821589 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.822349 kubelet[2966]: E1212 18:29:14.822322 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.822349 kubelet[2966]: W1212 18:29:14.822344 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.822534 kubelet[2966]: E1212 18:29:14.822496 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.824002 kubelet[2966]: E1212 18:29:14.823502 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.824002 kubelet[2966]: W1212 18:29:14.823525 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.824002 kubelet[2966]: E1212 18:29:14.823641 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:14.824150 kubelet[2966]: E1212 18:29:14.824060 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.824150 kubelet[2966]: W1212 18:29:14.824076 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.824314 kubelet[2966]: E1212 18:29:14.824261 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.824835 kubelet[2966]: E1212 18:29:14.824803 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.824835 kubelet[2966]: W1212 18:29:14.824823 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.825904 kubelet[2966]: E1212 18:29:14.825834 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.826239 kubelet[2966]: E1212 18:29:14.826204 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.826239 kubelet[2966]: W1212 18:29:14.826239 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.826397 kubelet[2966]: E1212 18:29:14.826372 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.826730 kubelet[2966]: E1212 18:29:14.826697 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.826730 kubelet[2966]: W1212 18:29:14.826721 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.826865 kubelet[2966]: E1212 18:29:14.826840 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.827346 kubelet[2966]: E1212 18:29:14.827303 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.827346 kubelet[2966]: W1212 18:29:14.827341 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.827480 kubelet[2966]: E1212 18:29:14.827416 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:14.828662 kubelet[2966]: E1212 18:29:14.828594 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.828662 kubelet[2966]: W1212 18:29:14.828616 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.828662 kubelet[2966]: E1212 18:29:14.828634 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.830644 kubelet[2966]: E1212 18:29:14.830608 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.830790 kubelet[2966]: W1212 18:29:14.830633 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.830790 kubelet[2966]: E1212 18:29:14.830694 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.831598 kubelet[2966]: E1212 18:29:14.831576 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.831687 kubelet[2966]: W1212 18:29:14.831628 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.831687 kubelet[2966]: E1212 18:29:14.831667 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.832399 kubelet[2966]: E1212 18:29:14.832375 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.832485 kubelet[2966]: W1212 18:29:14.832402 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.832485 kubelet[2966]: E1212 18:29:14.832421 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:29:14.833966 kubelet[2966]: E1212 18:29:14.833874 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:29:14.833966 kubelet[2966]: W1212 18:29:14.833898 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:29:14.833966 kubelet[2966]: E1212 18:29:14.833917 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:29:14.934000 audit: BPF prog-id=170 op=LOAD Dec 12 18:29:14.934000 audit[3584]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3449 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:14.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339643966336232363463646164613339613634313239346161663962 Dec 12 18:29:14.935000 audit: BPF prog-id=171 op=LOAD Dec 12 18:29:14.935000 audit[3584]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3449 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:14.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339643966336232363463646164613339613634313239346161663962 Dec 12 18:29:14.935000 audit: BPF prog-id=171 op=UNLOAD Dec 12 18:29:14.935000 audit[3584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:14.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339643966336232363463646164613339613634313239346161663962 Dec 12 18:29:14.935000 audit: BPF prog-id=170 op=UNLOAD Dec 12 18:29:14.935000 audit[3584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:14.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339643966336232363463646164613339613634313239346161663962 Dec 12 18:29:14.935000 audit: BPF prog-id=172 op=LOAD Dec 12 18:29:14.935000 audit[3584]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3449 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:14.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339643966336232363463646164613339613634313239346161663962 Dec 12 18:29:15.010575 containerd[1664]: time="2025-12-12T18:29:15.010410491Z" level=info msg="StartContainer for 
\"c9d9f3b264cdada39a641294aaf9b3fca99ad8445cd0dd4087a8b60d605fe590\" returns successfully" Dec 12 18:29:15.017829 systemd[1]: cri-containerd-c9d9f3b264cdada39a641294aaf9b3fca99ad8445cd0dd4087a8b60d605fe590.scope: Deactivated successfully. Dec 12 18:29:15.021000 audit: BPF prog-id=172 op=UNLOAD Dec 12 18:29:15.060207 containerd[1664]: time="2025-12-12T18:29:15.060008812Z" level=info msg="received container exit event container_id:\"c9d9f3b264cdada39a641294aaf9b3fca99ad8445cd0dd4087a8b60d605fe590\" id:\"c9d9f3b264cdada39a641294aaf9b3fca99ad8445cd0dd4087a8b60d605fe590\" pid:3621 exited_at:{seconds:1765564155 nanos:24676788}" Dec 12 18:29:15.135439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9d9f3b264cdada39a641294aaf9b3fca99ad8445cd0dd4087a8b60d605fe590-rootfs.mount: Deactivated successfully. Dec 12 18:29:15.688715 containerd[1664]: time="2025-12-12T18:29:15.688625562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:29:15.720188 kubelet[2966]: I1212 18:29:15.712525 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-758d84bbf7-96c4g" podStartSLOduration=3.624116475 podStartE2EDuration="7.712466604s" podCreationTimestamp="2025-12-12 18:29:08 +0000 UTC" firstStartedPulling="2025-12-12 18:29:09.021667528 +0000 UTC m=+24.753485235" lastFinishedPulling="2025-12-12 18:29:13.110017651 +0000 UTC m=+28.841835364" observedRunningTime="2025-12-12 18:29:13.721635014 +0000 UTC m=+29.453452747" watchObservedRunningTime="2025-12-12 18:29:15.712466604 +0000 UTC m=+31.444284317" Dec 12 18:29:16.496407 kubelet[2966]: E1212 18:29:16.496331 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:29:18.499972 kubelet[2966]: E1212 18:29:18.497953 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:29:20.495125 kubelet[2966]: E1212 18:29:20.495070 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:29:21.680626 containerd[1664]: time="2025-12-12T18:29:21.679495742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:29:21.680626 containerd[1664]: time="2025-12-12T18:29:21.680571055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 12 18:29:21.681560 containerd[1664]: time="2025-12-12T18:29:21.681512520Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:29:21.683821 containerd[1664]: time="2025-12-12T18:29:21.683769676Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:29:21.685059 containerd[1664]: time="2025-12-12T18:29:21.685025389Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.996310998s" Dec 12 18:29:21.685276 containerd[1664]: time="2025-12-12T18:29:21.685227325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:29:21.708377 containerd[1664]: time="2025-12-12T18:29:21.707649722Z" level=info msg="CreateContainer within sandbox \"a7876bc8c0267a0dacdbd744bbe5b67b60683f85e8b46ef6e015c48d160fa97a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:29:21.723076 containerd[1664]: time="2025-12-12T18:29:21.721324820Z" level=info msg="Container a327db3e4bae7bb5a15d3ff87b1569f97b79ce36d7a7c86234d1c3f0d302ede2: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:29:21.735490 containerd[1664]: time="2025-12-12T18:29:21.735348681Z" level=info msg="CreateContainer within sandbox \"a7876bc8c0267a0dacdbd744bbe5b67b60683f85e8b46ef6e015c48d160fa97a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a327db3e4bae7bb5a15d3ff87b1569f97b79ce36d7a7c86234d1c3f0d302ede2\"" Dec 12 18:29:21.736480 containerd[1664]: time="2025-12-12T18:29:21.736431153Z" level=info msg="StartContainer for \"a327db3e4bae7bb5a15d3ff87b1569f97b79ce36d7a7c86234d1c3f0d302ede2\"" Dec 12 18:29:21.740927 containerd[1664]: time="2025-12-12T18:29:21.740881967Z" level=info msg="connecting to shim a327db3e4bae7bb5a15d3ff87b1569f97b79ce36d7a7c86234d1c3f0d302ede2" address="unix:///run/containerd/s/aa7faf6a9d034be055d5993e594b780b399212432b3adbb9cb3d5662289e0e63" protocol=ttrpc version=3 Dec 12 18:29:21.789484 systemd[1]: Started cri-containerd-a327db3e4bae7bb5a15d3ff87b1569f97b79ce36d7a7c86234d1c3f0d302ede2.scope - libcontainer container a327db3e4bae7bb5a15d3ff87b1569f97b79ce36d7a7c86234d1c3f0d302ede2. 
Dec 12 18:29:21.866000 audit: BPF prog-id=173 op=LOAD Dec 12 18:29:21.872468 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 12 18:29:21.872594 kernel: audit: type=1334 audit(1765564161.866:567): prog-id=173 op=LOAD Dec 12 18:29:21.866000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3449 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:21.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323764623365346261653762623561313564336666383762313536 Dec 12 18:29:21.882907 kernel: audit: type=1300 audit(1765564161.866:567): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3449 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:21.882994 kernel: audit: type=1327 audit(1765564161.866:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323764623365346261653762623561313564336666383762313536 Dec 12 18:29:21.866000 audit: BPF prog-id=174 op=LOAD Dec 12 18:29:21.889987 kernel: audit: type=1334 audit(1765564161.866:568): prog-id=174 op=LOAD Dec 12 18:29:21.890057 kernel: audit: type=1300 audit(1765564161.866:568): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3449 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:21.866000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3449 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:21.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323764623365346261653762623561313564336666383762313536 Dec 12 18:29:21.895719 kernel: audit: type=1327 audit(1765564161.866:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323764623365346261653762623561313564336666383762313536 Dec 12 18:29:21.866000 audit: BPF prog-id=174 op=UNLOAD Dec 12 18:29:21.866000 audit[3678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:21.902923 kernel: audit: type=1334 audit(1765564161.866:569): prog-id=174 op=UNLOAD Dec 12 18:29:21.903004 kernel: audit: type=1300 
audit(1765564161.866:569): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:21.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323764623365346261653762623561313564336666383762313536 Dec 12 18:29:21.912194 kernel: audit: type=1327 audit(1765564161.866:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323764623365346261653762623561313564336666383762313536 Dec 12 18:29:21.866000 audit: BPF prog-id=173 op=UNLOAD Dec 12 18:29:21.866000 audit[3678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:21.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323764623365346261653762623561313564336666383762313536 Dec 12 18:29:21.866000 audit: BPF prog-id=175 op=LOAD Dec 12 18:29:21.866000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3449 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:21.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323764623365346261653762623561313564336666383762313536 Dec 12 18:29:21.917991 kernel: audit: type=1334 audit(1765564161.866:570): prog-id=173 op=UNLOAD Dec 12 18:29:21.998542 containerd[1664]: time="2025-12-12T18:29:21.998375696Z" level=info msg="StartContainer for \"a327db3e4bae7bb5a15d3ff87b1569f97b79ce36d7a7c86234d1c3f0d302ede2\" returns successfully" Dec 12 18:29:22.513621 kubelet[2966]: E1212 18:29:22.513468 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:29:23.128975 systemd[1]: cri-containerd-a327db3e4bae7bb5a15d3ff87b1569f97b79ce36d7a7c86234d1c3f0d302ede2.scope: Deactivated successfully. Dec 12 18:29:23.129572 systemd[1]: cri-containerd-a327db3e4bae7bb5a15d3ff87b1569f97b79ce36d7a7c86234d1c3f0d302ede2.scope: Consumed 860ms CPU time, 160.6M memory peak, 5.6M read from disk, 171.3M written to disk. 
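The audit PROCTITLE records above carry the audited process's command line as one hex string with NUL-separated arguments. Decoding the value logged for pid 3678 (comm="runc") shows it is the runc invocation backing the install-cni container; the trailing container id is cut short because the proctitle record itself is truncated. A small sketch, assuming Python 3:

    # Decode the PROCTITLE value from the audit records above (pid 3678, comm="runc").
    proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323764623365346261653762623561313564336666383762313536"
    argv = [arg.decode() for arg in bytes.fromhex(proctitle).split(b"\x00")]
    print(argv)
    # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
    #  '/run/containerd/io.containerd.runtime.v2.task/k8s.io/a327db3e4bae7bb5a15d3ff87b156']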
Dec 12 18:29:23.134000 audit: BPF prog-id=175 op=UNLOAD Dec 12 18:29:23.147212 containerd[1664]: time="2025-12-12T18:29:23.146209850Z" level=info msg="received container exit event container_id:\"a327db3e4bae7bb5a15d3ff87b1569f97b79ce36d7a7c86234d1c3f0d302ede2\" id:\"a327db3e4bae7bb5a15d3ff87b1569f97b79ce36d7a7c86234d1c3f0d302ede2\" pid:3691 exited_at:{seconds:1765564163 nanos:133543656}" Dec 12 18:29:23.222961 kubelet[2966]: I1212 18:29:23.222585 2966 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:29:23.224463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a327db3e4bae7bb5a15d3ff87b1569f97b79ce36d7a7c86234d1c3f0d302ede2-rootfs.mount: Deactivated successfully. Dec 12 18:29:23.306917 systemd[1]: Created slice kubepods-burstable-pod6315f58f_593a_4458_a6a7_81537b517426.slice - libcontainer container kubepods-burstable-pod6315f58f_593a_4458_a6a7_81537b517426.slice. Dec 12 18:29:23.313621 kubelet[2966]: W1212 18:29:23.313569 2966 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:srv-vv1nl.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-vv1nl.gb1.brightbox.com' and this object Dec 12 18:29:23.316521 kubelet[2966]: W1212 18:29:23.316473 2966 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:srv-vv1nl.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-vv1nl.gb1.brightbox.com' and this object Dec 12 18:29:23.318123 kubelet[2966]: E1212 18:29:23.318087 2966 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:srv-vv1nl.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-vv1nl.gb1.brightbox.com' and this object" logger="UnhandledError" Dec 12 18:29:23.319202 kubelet[2966]: E1212 18:29:23.318084 2966 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:srv-vv1nl.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-vv1nl.gb1.brightbox.com' and this object" logger="UnhandledError" Dec 12 18:29:23.319202 kubelet[2966]: W1212 18:29:23.318195 2966 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-vv1nl.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-vv1nl.gb1.brightbox.com' and this object Dec 12 18:29:23.319449 kubelet[2966]: E1212 18:29:23.318313 2966 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-vv1nl.gb1.brightbox.com\" cannot list resource \"configmaps\" 
in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-vv1nl.gb1.brightbox.com' and this object" logger="UnhandledError" Dec 12 18:29:23.324374 kubelet[2966]: W1212 18:29:23.324302 2966 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:srv-vv1nl.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-vv1nl.gb1.brightbox.com' and this object Dec 12 18:29:23.324584 kubelet[2966]: E1212 18:29:23.324351 2966 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:srv-vv1nl.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-vv1nl.gb1.brightbox.com' and this object" logger="UnhandledError" Dec 12 18:29:23.326502 systemd[1]: Created slice kubepods-burstable-poda11bd055_cf72_461b_9c3e_d8c8d4421dd2.slice - libcontainer container kubepods-burstable-poda11bd055_cf72_461b_9c3e_d8c8d4421dd2.slice. Dec 12 18:29:23.345243 systemd[1]: Created slice kubepods-besteffort-pod1566f111_6981_4bf6_b05d_69ebc0c0ffaa.slice - libcontainer container kubepods-besteffort-pod1566f111_6981_4bf6_b05d_69ebc0c0ffaa.slice. Dec 12 18:29:23.363857 systemd[1]: Created slice kubepods-besteffort-podd20e72ff_82d7_435d_b1ff_5952de8d6823.slice - libcontainer container kubepods-besteffort-podd20e72ff_82d7_435d_b1ff_5952de8d6823.slice. Dec 12 18:29:23.380392 systemd[1]: Created slice kubepods-besteffort-poddcf3ef86_77cd_46ff_befa_f79857cd4570.slice - libcontainer container kubepods-besteffort-poddcf3ef86_77cd_46ff_befa_f79857cd4570.slice. Dec 12 18:29:23.395520 systemd[1]: Created slice kubepods-besteffort-pod1d148975_a25d_444b_b4af_f56c99e82a44.slice - libcontainer container kubepods-besteffort-pod1d148975_a25d_444b_b4af_f56c99e82a44.slice. 
Dec 12 18:29:23.396772 kubelet[2966]: I1212 18:29:23.396733 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d148975-a25d-444b-b4af-f56c99e82a44-tigera-ca-bundle\") pod \"calico-kube-controllers-84669789bf-xx9l4\" (UID: \"1d148975-a25d-444b-b4af-f56c99e82a44\") " pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" Dec 12 18:29:23.396990 kubelet[2966]: I1212 18:29:23.396951 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1566f111-6981-4bf6-b05d-69ebc0c0ffaa-goldmane-ca-bundle\") pod \"goldmane-666569f655-cwqst\" (UID: \"1566f111-6981-4bf6-b05d-69ebc0c0ffaa\") " pod="calico-system/goldmane-666569f655-cwqst" Dec 12 18:29:23.397352 kubelet[2966]: I1212 18:29:23.397302 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-whisker-backend-key-pair\") pod \"whisker-7d4f547dfc-vzz6v\" (UID: \"2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b\") " pod="calico-system/whisker-7d4f547dfc-vzz6v" Dec 12 18:29:23.397504 kubelet[2966]: I1212 18:29:23.397480 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1566f111-6981-4bf6-b05d-69ebc0c0ffaa-config\") pod \"goldmane-666569f655-cwqst\" (UID: \"1566f111-6981-4bf6-b05d-69ebc0c0ffaa\") " pod="calico-system/goldmane-666569f655-cwqst" Dec 12 18:29:23.397657 kubelet[2966]: I1212 18:29:23.397633 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d20e72ff-82d7-435d-b1ff-5952de8d6823-calico-apiserver-certs\") pod \"calico-apiserver-557c4cb5b5-4c5jw\" (UID: \"d20e72ff-82d7-435d-b1ff-5952de8d6823\") " pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" Dec 12 18:29:23.397794 kubelet[2966]: I1212 18:29:23.397771 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhzkl\" (UniqueName: \"kubernetes.io/projected/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-kube-api-access-lhzkl\") pod \"whisker-7d4f547dfc-vzz6v\" (UID: \"2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b\") " pod="calico-system/whisker-7d4f547dfc-vzz6v" Dec 12 18:29:23.398047 kubelet[2966]: I1212 18:29:23.397970 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11bd055-cf72-461b-9c3e-d8c8d4421dd2-config-volume\") pod \"coredns-668d6bf9bc-grwhm\" (UID: \"a11bd055-cf72-461b-9c3e-d8c8d4421dd2\") " pod="kube-system/coredns-668d6bf9bc-grwhm" Dec 12 18:29:23.398148 kubelet[2966]: I1212 18:29:23.398026 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqr7h\" (UniqueName: \"kubernetes.io/projected/d20e72ff-82d7-435d-b1ff-5952de8d6823-kube-api-access-jqr7h\") pod \"calico-apiserver-557c4cb5b5-4c5jw\" (UID: \"d20e72ff-82d7-435d-b1ff-5952de8d6823\") " pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" Dec 12 18:29:23.398734 kubelet[2966]: I1212 18:29:23.398708 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw84z\" (UniqueName: 
\"kubernetes.io/projected/1566f111-6981-4bf6-b05d-69ebc0c0ffaa-kube-api-access-mw84z\") pod \"goldmane-666569f655-cwqst\" (UID: \"1566f111-6981-4bf6-b05d-69ebc0c0ffaa\") " pod="calico-system/goldmane-666569f655-cwqst" Dec 12 18:29:23.398865 kubelet[2966]: I1212 18:29:23.398841 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6315f58f-593a-4458-a6a7-81537b517426-config-volume\") pod \"coredns-668d6bf9bc-lsgz8\" (UID: \"6315f58f-593a-4458-a6a7-81537b517426\") " pod="kube-system/coredns-668d6bf9bc-lsgz8" Dec 12 18:29:23.399039 kubelet[2966]: I1212 18:29:23.399003 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dcf3ef86-77cd-46ff-befa-f79857cd4570-calico-apiserver-certs\") pod \"calico-apiserver-557c4cb5b5-v5k7t\" (UID: \"dcf3ef86-77cd-46ff-befa-f79857cd4570\") " pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" Dec 12 18:29:23.399683 kubelet[2966]: I1212 18:29:23.399541 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pjtf\" (UniqueName: \"kubernetes.io/projected/1d148975-a25d-444b-b4af-f56c99e82a44-kube-api-access-9pjtf\") pod \"calico-kube-controllers-84669789bf-xx9l4\" (UID: \"1d148975-a25d-444b-b4af-f56c99e82a44\") " pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" Dec 12 18:29:23.400354 kubelet[2966]: I1212 18:29:23.400194 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6szg9\" (UniqueName: \"kubernetes.io/projected/dcf3ef86-77cd-46ff-befa-f79857cd4570-kube-api-access-6szg9\") pod \"calico-apiserver-557c4cb5b5-v5k7t\" (UID: \"dcf3ef86-77cd-46ff-befa-f79857cd4570\") " pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" Dec 12 18:29:23.400733 kubelet[2966]: I1212 18:29:23.400616 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1566f111-6981-4bf6-b05d-69ebc0c0ffaa-goldmane-key-pair\") pod \"goldmane-666569f655-cwqst\" (UID: \"1566f111-6981-4bf6-b05d-69ebc0c0ffaa\") " pod="calico-system/goldmane-666569f655-cwqst" Dec 12 18:29:23.403345 kubelet[2966]: I1212 18:29:23.402703 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-whisker-ca-bundle\") pod \"whisker-7d4f547dfc-vzz6v\" (UID: \"2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b\") " pod="calico-system/whisker-7d4f547dfc-vzz6v" Dec 12 18:29:23.403345 kubelet[2966]: I1212 18:29:23.402846 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vr9s\" (UniqueName: \"kubernetes.io/projected/a11bd055-cf72-461b-9c3e-d8c8d4421dd2-kube-api-access-2vr9s\") pod \"coredns-668d6bf9bc-grwhm\" (UID: \"a11bd055-cf72-461b-9c3e-d8c8d4421dd2\") " pod="kube-system/coredns-668d6bf9bc-grwhm" Dec 12 18:29:23.403345 kubelet[2966]: I1212 18:29:23.402886 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4gcf\" (UniqueName: \"kubernetes.io/projected/6315f58f-593a-4458-a6a7-81537b517426-kube-api-access-v4gcf\") pod \"coredns-668d6bf9bc-lsgz8\" (UID: \"6315f58f-593a-4458-a6a7-81537b517426\") " 
pod="kube-system/coredns-668d6bf9bc-lsgz8" Dec 12 18:29:23.410848 systemd[1]: Created slice kubepods-besteffort-pod2bd61bfe_05dc_4b61_bc64_eee3d98d3f4b.slice - libcontainer container kubepods-besteffort-pod2bd61bfe_05dc_4b61_bc64_eee3d98d3f4b.slice. Dec 12 18:29:23.616946 containerd[1664]: time="2025-12-12T18:29:23.616890727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lsgz8,Uid:6315f58f-593a-4458-a6a7-81537b517426,Namespace:kube-system,Attempt:0,}" Dec 12 18:29:23.639854 containerd[1664]: time="2025-12-12T18:29:23.639391509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grwhm,Uid:a11bd055-cf72-461b-9c3e-d8c8d4421dd2,Namespace:kube-system,Attempt:0,}" Dec 12 18:29:23.658483 containerd[1664]: time="2025-12-12T18:29:23.658405187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cwqst,Uid:1566f111-6981-4bf6-b05d-69ebc0c0ffaa,Namespace:calico-system,Attempt:0,}" Dec 12 18:29:23.703144 containerd[1664]: time="2025-12-12T18:29:23.703077230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84669789bf-xx9l4,Uid:1d148975-a25d-444b-b4af-f56c99e82a44,Namespace:calico-system,Attempt:0,}" Dec 12 18:29:23.822217 containerd[1664]: time="2025-12-12T18:29:23.821991075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:29:23.979550 containerd[1664]: time="2025-12-12T18:29:23.979369162Z" level=error msg="Failed to destroy network for sandbox \"7ea5c8b86348f0087f2887b37f4485466d51b357c001c82c5e16cba3bb5508e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:23.982138 containerd[1664]: time="2025-12-12T18:29:23.982052353Z" level=error msg="Failed to destroy network for sandbox \"1ab94c507346b30486672f806d41d5b2997bf50ae4da093be901ab49d04c100b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:23.982596 containerd[1664]: time="2025-12-12T18:29:23.982418464Z" level=error msg="Failed to destroy network for sandbox \"79370fd79a56db49294382422a8b896baf1eca8a81a687b1cf24e68fa72e5212\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:23.988997 containerd[1664]: time="2025-12-12T18:29:23.983636134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lsgz8,Uid:6315f58f-593a-4458-a6a7-81537b517426,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea5c8b86348f0087f2887b37f4485466d51b357c001c82c5e16cba3bb5508e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:23.989832 containerd[1664]: time="2025-12-12T18:29:23.988869510Z" level=error msg="Failed to destroy network for sandbox \"0cd230038a584285c9c13288ff8d0e9a9d4e0ec2a4a87d6f4a018e9d5d3f8c31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:23.990393 containerd[1664]: 
time="2025-12-12T18:29:23.988906935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84669789bf-xx9l4,Uid:1d148975-a25d-444b-b4af-f56c99e82a44,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ab94c507346b30486672f806d41d5b2997bf50ae4da093be901ab49d04c100b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:23.998198 containerd[1664]: time="2025-12-12T18:29:23.998106920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cwqst,Uid:1566f111-6981-4bf6-b05d-69ebc0c0ffaa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79370fd79a56db49294382422a8b896baf1eca8a81a687b1cf24e68fa72e5212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:23.999455 kubelet[2966]: E1212 18:29:23.999373 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ab94c507346b30486672f806d41d5b2997bf50ae4da093be901ab49d04c100b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:24.000021 kubelet[2966]: E1212 18:29:23.999585 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea5c8b86348f0087f2887b37f4485466d51b357c001c82c5e16cba3bb5508e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:24.000021 kubelet[2966]: E1212 18:29:23.999631 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea5c8b86348f0087f2887b37f4485466d51b357c001c82c5e16cba3bb5508e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lsgz8" Dec 12 18:29:24.000021 kubelet[2966]: E1212 18:29:23.999690 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea5c8b86348f0087f2887b37f4485466d51b357c001c82c5e16cba3bb5508e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lsgz8" Dec 12 18:29:24.000021 kubelet[2966]: E1212 18:29:23.999832 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ab94c507346b30486672f806d41d5b2997bf50ae4da093be901ab49d04c100b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" Dec 12 18:29:24.000408 kubelet[2966]: E1212 
18:29:23.999866 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ab94c507346b30486672f806d41d5b2997bf50ae4da093be901ab49d04c100b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" Dec 12 18:29:24.000408 kubelet[2966]: E1212 18:29:24.000084 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lsgz8_kube-system(6315f58f-593a-4458-a6a7-81537b517426)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lsgz8_kube-system(6315f58f-593a-4458-a6a7-81537b517426)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ea5c8b86348f0087f2887b37f4485466d51b357c001c82c5e16cba3bb5508e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lsgz8" podUID="6315f58f-593a-4458-a6a7-81537b517426" Dec 12 18:29:24.000408 kubelet[2966]: E1212 18:29:23.999909 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84669789bf-xx9l4_calico-system(1d148975-a25d-444b-b4af-f56c99e82a44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84669789bf-xx9l4_calico-system(1d148975-a25d-444b-b4af-f56c99e82a44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ab94c507346b30486672f806d41d5b2997bf50ae4da093be901ab49d04c100b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" podUID="1d148975-a25d-444b-b4af-f56c99e82a44" Dec 12 18:29:24.000809 kubelet[2966]: E1212 18:29:24.000350 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79370fd79a56db49294382422a8b896baf1eca8a81a687b1cf24e68fa72e5212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:24.000809 kubelet[2966]: E1212 18:29:24.000391 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79370fd79a56db49294382422a8b896baf1eca8a81a687b1cf24e68fa72e5212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cwqst" Dec 12 18:29:24.000809 kubelet[2966]: E1212 18:29:24.000427 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79370fd79a56db49294382422a8b896baf1eca8a81a687b1cf24e68fa72e5212\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cwqst" Dec 12 
18:29:24.002120 kubelet[2966]: E1212 18:29:24.000470 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-cwqst_calico-system(1566f111-6981-4bf6-b05d-69ebc0c0ffaa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-cwqst_calico-system(1566f111-6981-4bf6-b05d-69ebc0c0ffaa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79370fd79a56db49294382422a8b896baf1eca8a81a687b1cf24e68fa72e5212\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cwqst" podUID="1566f111-6981-4bf6-b05d-69ebc0c0ffaa" Dec 12 18:29:24.002120 kubelet[2966]: E1212 18:29:24.002002 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cd230038a584285c9c13288ff8d0e9a9d4e0ec2a4a87d6f4a018e9d5d3f8c31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:24.002314 containerd[1664]: time="2025-12-12T18:29:24.001658334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grwhm,Uid:a11bd055-cf72-461b-9c3e-d8c8d4421dd2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cd230038a584285c9c13288ff8d0e9a9d4e0ec2a4a87d6f4a018e9d5d3f8c31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:24.002442 kubelet[2966]: E1212 18:29:24.002087 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cd230038a584285c9c13288ff8d0e9a9d4e0ec2a4a87d6f4a018e9d5d3f8c31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-grwhm" Dec 12 18:29:24.002442 kubelet[2966]: E1212 18:29:24.002193 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cd230038a584285c9c13288ff8d0e9a9d4e0ec2a4a87d6f4a018e9d5d3f8c31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-grwhm" Dec 12 18:29:24.002442 kubelet[2966]: E1212 18:29:24.002398 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-grwhm_kube-system(a11bd055-cf72-461b-9c3e-d8c8d4421dd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-grwhm_kube-system(a11bd055-cf72-461b-9c3e-d8c8d4421dd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cd230038a584285c9c13288ff8d0e9a9d4e0ec2a4a87d6f4a018e9d5d3f8c31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-grwhm" 
podUID="a11bd055-cf72-461b-9c3e-d8c8d4421dd2" Dec 12 18:29:24.506193 systemd[1]: Created slice kubepods-besteffort-pod30dcceea_b67a_4ecb_b6c6_16baeb5ae67c.slice - libcontainer container kubepods-besteffort-pod30dcceea_b67a_4ecb_b6c6_16baeb5ae67c.slice. Dec 12 18:29:24.507565 kubelet[2966]: E1212 18:29:24.506153 2966 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Dec 12 18:29:24.507565 kubelet[2966]: E1212 18:29:24.506375 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d20e72ff-82d7-435d-b1ff-5952de8d6823-calico-apiserver-certs podName:d20e72ff-82d7-435d-b1ff-5952de8d6823 nodeName:}" failed. No retries permitted until 2025-12-12 18:29:25.006306858 +0000 UTC m=+40.738124559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d20e72ff-82d7-435d-b1ff-5952de8d6823-calico-apiserver-certs") pod "calico-apiserver-557c4cb5b5-4c5jw" (UID: "d20e72ff-82d7-435d-b1ff-5952de8d6823") : failed to sync secret cache: timed out waiting for the condition Dec 12 18:29:24.509057 kubelet[2966]: E1212 18:29:24.508837 2966 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Dec 12 18:29:24.509057 kubelet[2966]: E1212 18:29:24.508890 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcf3ef86-77cd-46ff-befa-f79857cd4570-calico-apiserver-certs podName:dcf3ef86-77cd-46ff-befa-f79857cd4570 nodeName:}" failed. No retries permitted until 2025-12-12 18:29:25.008876107 +0000 UTC m=+40.740693812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dcf3ef86-77cd-46ff-befa-f79857cd4570-calico-apiserver-certs") pod "calico-apiserver-557c4cb5b5-v5k7t" (UID: "dcf3ef86-77cd-46ff-befa-f79857cd4570") : failed to sync secret cache: timed out waiting for the condition Dec 12 18:29:24.509057 kubelet[2966]: E1212 18:29:24.508934 2966 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 12 18:29:24.509057 kubelet[2966]: E1212 18:29:24.508995 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-whisker-ca-bundle podName:2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b nodeName:}" failed. No retries permitted until 2025-12-12 18:29:25.008972453 +0000 UTC m=+40.740790155 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-whisker-ca-bundle") pod "whisker-7d4f547dfc-vzz6v" (UID: "2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b") : failed to sync configmap cache: timed out waiting for the condition Dec 12 18:29:24.511350 kubelet[2966]: E1212 18:29:24.511229 2966 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Dec 12 18:29:24.511350 kubelet[2966]: E1212 18:29:24.511293 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-whisker-backend-key-pair podName:2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b nodeName:}" failed. No retries permitted until 2025-12-12 18:29:25.011277457 +0000 UTC m=+40.743095158 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-whisker-backend-key-pair") pod "whisker-7d4f547dfc-vzz6v" (UID: "2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b") : failed to sync secret cache: timed out waiting for the condition Dec 12 18:29:24.512281 containerd[1664]: time="2025-12-12T18:29:24.512236415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzv84,Uid:30dcceea-b67a-4ecb-b6c6-16baeb5ae67c,Namespace:calico-system,Attempt:0,}" Dec 12 18:29:24.521145 kubelet[2966]: E1212 18:29:24.520771 2966 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 18:29:24.521145 kubelet[2966]: E1212 18:29:24.520953 2966 projected.go:194] Error preparing data for projected volume kube-api-access-6szg9 for pod calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t: failed to sync configmap cache: timed out waiting for the condition Dec 12 18:29:24.521145 kubelet[2966]: E1212 18:29:24.521053 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dcf3ef86-77cd-46ff-befa-f79857cd4570-kube-api-access-6szg9 podName:dcf3ef86-77cd-46ff-befa-f79857cd4570 nodeName:}" failed. No retries permitted until 2025-12-12 18:29:25.021033315 +0000 UTC m=+40.752851016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6szg9" (UniqueName: "kubernetes.io/projected/dcf3ef86-77cd-46ff-befa-f79857cd4570-kube-api-access-6szg9") pod "calico-apiserver-557c4cb5b5-v5k7t" (UID: "dcf3ef86-77cd-46ff-befa-f79857cd4570") : failed to sync configmap cache: timed out waiting for the condition Dec 12 18:29:24.547436 kubelet[2966]: E1212 18:29:24.547341 2966 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 12 18:29:24.548751 kubelet[2966]: E1212 18:29:24.547668 2966 projected.go:194] Error preparing data for projected volume kube-api-access-jqr7h for pod calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw: failed to sync configmap cache: timed out waiting for the condition Dec 12 18:29:24.548751 kubelet[2966]: E1212 18:29:24.547778 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d20e72ff-82d7-435d-b1ff-5952de8d6823-kube-api-access-jqr7h podName:d20e72ff-82d7-435d-b1ff-5952de8d6823 nodeName:}" failed. No retries permitted until 2025-12-12 18:29:25.047750221 +0000 UTC m=+40.779567920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jqr7h" (UniqueName: "kubernetes.io/projected/d20e72ff-82d7-435d-b1ff-5952de8d6823-kube-api-access-jqr7h") pod "calico-apiserver-557c4cb5b5-4c5jw" (UID: "d20e72ff-82d7-435d-b1ff-5952de8d6823") : failed to sync configmap cache: timed out waiting for the condition Dec 12 18:29:24.608335 containerd[1664]: time="2025-12-12T18:29:24.608233441Z" level=error msg="Failed to destroy network for sandbox \"86fb97ccd7ecae62e551f1b09da2b3825dcf7a8cd9224c706b072f3121900b67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:24.612928 systemd[1]: run-netns-cni\x2de787525b\x2dae08\x2d2f3c\x2dc776\x2d54c99323ca46.mount: Deactivated successfully. 
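Each MountVolume failure above is retried on a fixed short backoff: the nestedpendingoperations record names a durationBeforeRetry of 500ms together with an absolute "No retries permitted until" timestamp. Subtracting the backoff from that timestamp reproduces the instant of the failed attempt, which falls between the surrounding kubelet entries at 18:29:24.506. A small sketch of that check, assuming Python 3 (Decimal is used only to keep the nanosecond digits exact):

    from decimal import Decimal

    def hms_to_seconds(ts):
        """'18:29:25.006306858' -> seconds since midnight, nanosecond precision."""
        h, m, s = ts.split(":")
        return Decimal(h) * 3600 + Decimal(m) * 60 + Decimal(s)

    retry_at = hms_to_seconds("18:29:25.006306858")  # "No retries permitted until 2025-12-12 18:29:25.006306858"
    backoff = Decimal("0.5")                         # "(durationBeforeRetry 500ms)"
    print(retry_at - backoff)                        # 66564.506306858, i.e. 18:29:24.506306858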
Dec 12 18:29:24.616671 containerd[1664]: time="2025-12-12T18:29:24.616617806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzv84,Uid:30dcceea-b67a-4ecb-b6c6-16baeb5ae67c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"86fb97ccd7ecae62e551f1b09da2b3825dcf7a8cd9224c706b072f3121900b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:24.617215 kubelet[2966]: E1212 18:29:24.617071 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86fb97ccd7ecae62e551f1b09da2b3825dcf7a8cd9224c706b072f3121900b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:24.617461 kubelet[2966]: E1212 18:29:24.617142 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86fb97ccd7ecae62e551f1b09da2b3825dcf7a8cd9224c706b072f3121900b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mzv84" Dec 12 18:29:24.618071 kubelet[2966]: E1212 18:29:24.617572 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86fb97ccd7ecae62e551f1b09da2b3825dcf7a8cd9224c706b072f3121900b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mzv84" Dec 12 18:29:24.618071 kubelet[2966]: E1212 18:29:24.617653 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mzv84_calico-system(30dcceea-b67a-4ecb-b6c6-16baeb5ae67c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mzv84_calico-system(30dcceea-b67a-4ecb-b6c6-16baeb5ae67c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86fb97ccd7ecae62e551f1b09da2b3825dcf7a8cd9224c706b072f3121900b67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:29:25.179112 containerd[1664]: time="2025-12-12T18:29:25.179053255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c4cb5b5-4c5jw,Uid:d20e72ff-82d7-435d-b1ff-5952de8d6823,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:29:25.191763 containerd[1664]: time="2025-12-12T18:29:25.191690164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c4cb5b5-v5k7t,Uid:dcf3ef86-77cd-46ff-befa-f79857cd4570,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:29:25.218097 containerd[1664]: time="2025-12-12T18:29:25.217889174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d4f547dfc-vzz6v,Uid:2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b,Namespace:calico-system,Attempt:0,}" Dec 12 
18:29:25.353189 containerd[1664]: time="2025-12-12T18:29:25.351355168Z" level=error msg="Failed to destroy network for sandbox \"e7ca63479fac676ad97cb36f151de80a8c95b01cdaf3ad8921db4335059aa826\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:25.355537 systemd[1]: run-netns-cni\x2dda9e6cbb\x2dc0f6\x2d159b\x2d8d23\x2d847111032889.mount: Deactivated successfully. Dec 12 18:29:25.358470 containerd[1664]: time="2025-12-12T18:29:25.358299221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c4cb5b5-v5k7t,Uid:dcf3ef86-77cd-46ff-befa-f79857cd4570,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7ca63479fac676ad97cb36f151de80a8c95b01cdaf3ad8921db4335059aa826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:25.360284 kubelet[2966]: E1212 18:29:25.359665 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7ca63479fac676ad97cb36f151de80a8c95b01cdaf3ad8921db4335059aa826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:25.360284 kubelet[2966]: E1212 18:29:25.359812 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7ca63479fac676ad97cb36f151de80a8c95b01cdaf3ad8921db4335059aa826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" Dec 12 18:29:25.360284 kubelet[2966]: E1212 18:29:25.359850 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7ca63479fac676ad97cb36f151de80a8c95b01cdaf3ad8921db4335059aa826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" Dec 12 18:29:25.361721 kubelet[2966]: E1212 18:29:25.359922 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-557c4cb5b5-v5k7t_calico-apiserver(dcf3ef86-77cd-46ff-befa-f79857cd4570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-557c4cb5b5-v5k7t_calico-apiserver(dcf3ef86-77cd-46ff-befa-f79857cd4570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7ca63479fac676ad97cb36f151de80a8c95b01cdaf3ad8921db4335059aa826\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" podUID="dcf3ef86-77cd-46ff-befa-f79857cd4570" Dec 12 18:29:25.366520 containerd[1664]: time="2025-12-12T18:29:25.364836528Z" level=error msg="Failed to destroy network for sandbox 
\"7a50ed7d129445894890f38539e2973a48be739e17f6a287024e7649079d5002\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:25.370314 systemd[1]: run-netns-cni\x2d340317f3\x2dc959\x2dbbf4\x2d47cb\x2d5cb75be1c25f.mount: Deactivated successfully. Dec 12 18:29:25.373212 containerd[1664]: time="2025-12-12T18:29:25.372507000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c4cb5b5-4c5jw,Uid:d20e72ff-82d7-435d-b1ff-5952de8d6823,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a50ed7d129445894890f38539e2973a48be739e17f6a287024e7649079d5002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:25.373620 kubelet[2966]: E1212 18:29:25.373567 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a50ed7d129445894890f38539e2973a48be739e17f6a287024e7649079d5002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:25.373705 kubelet[2966]: E1212 18:29:25.373648 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a50ed7d129445894890f38539e2973a48be739e17f6a287024e7649079d5002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" Dec 12 18:29:25.373705 kubelet[2966]: E1212 18:29:25.373687 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a50ed7d129445894890f38539e2973a48be739e17f6a287024e7649079d5002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" Dec 12 18:29:25.374320 kubelet[2966]: E1212 18:29:25.373765 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-557c4cb5b5-4c5jw_calico-apiserver(d20e72ff-82d7-435d-b1ff-5952de8d6823)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-557c4cb5b5-4c5jw_calico-apiserver(d20e72ff-82d7-435d-b1ff-5952de8d6823)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a50ed7d129445894890f38539e2973a48be739e17f6a287024e7649079d5002\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" podUID="d20e72ff-82d7-435d-b1ff-5952de8d6823" Dec 12 18:29:25.396533 containerd[1664]: time="2025-12-12T18:29:25.396467906Z" level=error msg="Failed to destroy network for sandbox \"dfbc2cd5643d28cfc9cabaeabcc35bbd118f5c627767c34e5b3031f9fe7ce289\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:25.399832 systemd[1]: run-netns-cni\x2dbc79e4f0\x2ddb67\x2d10a5\x2daff2\x2d0a48a225a6f8.mount: Deactivated successfully. Dec 12 18:29:25.404467 containerd[1664]: time="2025-12-12T18:29:25.404235742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d4f547dfc-vzz6v,Uid:2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfbc2cd5643d28cfc9cabaeabcc35bbd118f5c627767c34e5b3031f9fe7ce289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:25.405250 kubelet[2966]: E1212 18:29:25.405197 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfbc2cd5643d28cfc9cabaeabcc35bbd118f5c627767c34e5b3031f9fe7ce289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:25.405652 kubelet[2966]: E1212 18:29:25.405282 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfbc2cd5643d28cfc9cabaeabcc35bbd118f5c627767c34e5b3031f9fe7ce289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d4f547dfc-vzz6v" Dec 12 18:29:25.405652 kubelet[2966]: E1212 18:29:25.405326 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfbc2cd5643d28cfc9cabaeabcc35bbd118f5c627767c34e5b3031f9fe7ce289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d4f547dfc-vzz6v" Dec 12 18:29:25.405652 kubelet[2966]: E1212 18:29:25.405419 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d4f547dfc-vzz6v_calico-system(2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d4f547dfc-vzz6v_calico-system(2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfbc2cd5643d28cfc9cabaeabcc35bbd118f5c627767c34e5b3031f9fe7ce289\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d4f547dfc-vzz6v" podUID="2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b" Dec 12 18:29:33.091194 kubelet[2966]: I1212 18:29:33.091059 2966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:29:33.339808 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 12 18:29:33.340635 kernel: audit: type=1325 audit(1765564173.333:573): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3943 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:33.333000 audit[3943]: NETFILTER_CFG table=filter:119 family=2 entries=21 
op=nft_register_rule pid=3943 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:33.348894 kernel: audit: type=1300 audit(1765564173.333:573): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe45aff360 a2=0 a3=7ffe45aff34c items=0 ppid=3085 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:33.348969 kernel: audit: type=1327 audit(1765564173.333:573): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:33.333000 audit[3943]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe45aff360 a2=0 a3=7ffe45aff34c items=0 ppid=3085 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:33.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:33.346000 audit[3943]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3943 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:33.346000 audit[3943]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe45aff360 a2=0 a3=7ffe45aff34c items=0 ppid=3085 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:33.358476 kernel: audit: type=1325 audit(1765564173.346:574): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3943 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:33.358545 kernel: audit: type=1300 audit(1765564173.346:574): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe45aff360 a2=0 a3=7ffe45aff34c items=0 ppid=3085 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:33.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:33.369192 kernel: audit: type=1327 audit(1765564173.346:574): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:35.496197 containerd[1664]: time="2025-12-12T18:29:35.496013680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84669789bf-xx9l4,Uid:1d148975-a25d-444b-b4af-f56c99e82a44,Namespace:calico-system,Attempt:0,}" Dec 12 18:29:35.689716 containerd[1664]: time="2025-12-12T18:29:35.689464398Z" level=error msg="Failed to destroy network for sandbox \"5d84d8494c092b04081493acef611b68c2486a6f1917e9b8e39711bef069867a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:35.694638 systemd[1]: run-netns-cni\x2d0799ed2b\x2d7b0e\x2de4e7\x2dc0e0\x2d680ed6926535.mount: Deactivated successfully. 
Dec 12 18:29:35.698550 containerd[1664]: time="2025-12-12T18:29:35.697005175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84669789bf-xx9l4,Uid:1d148975-a25d-444b-b4af-f56c99e82a44,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d84d8494c092b04081493acef611b68c2486a6f1917e9b8e39711bef069867a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:35.699557 kubelet[2966]: E1212 18:29:35.699492 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d84d8494c092b04081493acef611b68c2486a6f1917e9b8e39711bef069867a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:35.701043 kubelet[2966]: E1212 18:29:35.700411 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d84d8494c092b04081493acef611b68c2486a6f1917e9b8e39711bef069867a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" Dec 12 18:29:35.701043 kubelet[2966]: E1212 18:29:35.700464 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d84d8494c092b04081493acef611b68c2486a6f1917e9b8e39711bef069867a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" Dec 12 18:29:35.701043 kubelet[2966]: E1212 18:29:35.700541 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84669789bf-xx9l4_calico-system(1d148975-a25d-444b-b4af-f56c99e82a44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84669789bf-xx9l4_calico-system(1d148975-a25d-444b-b4af-f56c99e82a44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d84d8494c092b04081493acef611b68c2486a6f1917e9b8e39711bef069867a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" podUID="1d148975-a25d-444b-b4af-f56c99e82a44" Dec 12 18:29:37.110868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885816448.mount: Deactivated successfully. 
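Every failed RunPodSandbox in this stretch reports the same root cause and the same advice: the Calico CNI plugin cannot stat /var/lib/calico/nodename because the calico-node container is not running yet and so has not populated /var/lib/calico/. A host-side check that mirrors the plugin's own stat is a one-liner; this is only a diagnostic sketch run on the node, not part of Calico or the kubelet:

    import os

    # The nodename file is written by calico-node once it starts; while it is
    # missing, every CNI add/delete fails exactly as in the log entries above.
    nodename_file = "/var/lib/calico/nodename"
    print(nodename_file, "exists" if os.path.exists(nodename_file) else "missing")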
Dec 12 18:29:37.156811 containerd[1664]: time="2025-12-12T18:29:37.155777964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:29:37.182753 containerd[1664]: time="2025-12-12T18:29:37.182663481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 12 18:29:37.211211 containerd[1664]: time="2025-12-12T18:29:37.209874134Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:29:37.213119 containerd[1664]: time="2025-12-12T18:29:37.213075171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:29:37.214057 containerd[1664]: time="2025-12-12T18:29:37.214004721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 13.391924101s" Dec 12 18:29:37.227082 containerd[1664]: time="2025-12-12T18:29:37.227042508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:29:37.270076 containerd[1664]: time="2025-12-12T18:29:37.270024168Z" level=info msg="CreateContainer within sandbox \"a7876bc8c0267a0dacdbd744bbe5b67b60683f85e8b46ef6e015c48d160fa97a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:29:37.343425 containerd[1664]: time="2025-12-12T18:29:37.343358301Z" level=info msg="Container b1a68d22c8cbc80df29b9a0bc5e14de3c92966cb9265b33b3e1942ce3ff840d7: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:29:37.347120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount965235454.mount: Deactivated successfully. 
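
The pull that just completed reports both its size and its duration, so the effective transfer rate can be read straight off the log line: just under 157 MB in about 13.4 s, i.e. on the order of 11-12 MB/s. A quick sketch of that arithmetic using the values printed above:

    # Effective pull rate for ghcr.io/flatcar/calico/node:v3.30.4,
    # computed from the size and duration reported in the log line above.
    size_bytes = 156_883_537      # 'size "156883537"'
    duration_s = 13.391924101     # "in 13.391924101s"

    rate = size_bytes / duration_s
    print(f"~{rate / 1e6:.1f} MB/s  (~{rate / 2**20:.1f} MiB/s)")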
Dec 12 18:29:37.411765 containerd[1664]: time="2025-12-12T18:29:37.411508611Z" level=info msg="CreateContainer within sandbox \"a7876bc8c0267a0dacdbd744bbe5b67b60683f85e8b46ef6e015c48d160fa97a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b1a68d22c8cbc80df29b9a0bc5e14de3c92966cb9265b33b3e1942ce3ff840d7\"" Dec 12 18:29:37.414135 containerd[1664]: time="2025-12-12T18:29:37.414093821Z" level=info msg="StartContainer for \"b1a68d22c8cbc80df29b9a0bc5e14de3c92966cb9265b33b3e1942ce3ff840d7\"" Dec 12 18:29:37.416675 containerd[1664]: time="2025-12-12T18:29:37.416631715Z" level=info msg="connecting to shim b1a68d22c8cbc80df29b9a0bc5e14de3c92966cb9265b33b3e1942ce3ff840d7" address="unix:///run/containerd/s/aa7faf6a9d034be055d5993e594b780b399212432b3adbb9cb3d5662289e0e63" protocol=ttrpc version=3 Dec 12 18:29:37.520921 containerd[1664]: time="2025-12-12T18:29:37.520510641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grwhm,Uid:a11bd055-cf72-461b-9c3e-d8c8d4421dd2,Namespace:kube-system,Attempt:0,}" Dec 12 18:29:37.521685 containerd[1664]: time="2025-12-12T18:29:37.521652637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lsgz8,Uid:6315f58f-593a-4458-a6a7-81537b517426,Namespace:kube-system,Attempt:0,}" Dec 12 18:29:37.561718 containerd[1664]: time="2025-12-12T18:29:37.560365937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c4cb5b5-v5k7t,Uid:dcf3ef86-77cd-46ff-befa-f79857cd4570,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:29:37.569418 systemd[1]: Started cri-containerd-b1a68d22c8cbc80df29b9a0bc5e14de3c92966cb9265b33b3e1942ce3ff840d7.scope - libcontainer container b1a68d22c8cbc80df29b9a0bc5e14de3c92966cb9265b33b3e1942ce3ff840d7. Dec 12 18:29:37.835980 containerd[1664]: time="2025-12-12T18:29:37.835523941Z" level=error msg="Failed to destroy network for sandbox \"90a9d948ea29a4770c25ed8dbcc421058fcbcebf18640825be008b0ba686d22d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:37.840048 containerd[1664]: time="2025-12-12T18:29:37.839997149Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grwhm,Uid:a11bd055-cf72-461b-9c3e-d8c8d4421dd2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a9d948ea29a4770c25ed8dbcc421058fcbcebf18640825be008b0ba686d22d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:37.847450 kubelet[2966]: E1212 18:29:37.847229 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a9d948ea29a4770c25ed8dbcc421058fcbcebf18640825be008b0ba686d22d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:37.848790 kubelet[2966]: E1212 18:29:37.848241 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a9d948ea29a4770c25ed8dbcc421058fcbcebf18640825be008b0ba686d22d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-grwhm" Dec 12 18:29:37.848790 kubelet[2966]: E1212 18:29:37.848299 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a9d948ea29a4770c25ed8dbcc421058fcbcebf18640825be008b0ba686d22d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-grwhm" Dec 12 18:29:37.848790 kubelet[2966]: E1212 18:29:37.848618 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-grwhm_kube-system(a11bd055-cf72-461b-9c3e-d8c8d4421dd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-grwhm_kube-system(a11bd055-cf72-461b-9c3e-d8c8d4421dd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90a9d948ea29a4770c25ed8dbcc421058fcbcebf18640825be008b0ba686d22d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-grwhm" podUID="a11bd055-cf72-461b-9c3e-d8c8d4421dd2" Dec 12 18:29:37.871104 containerd[1664]: time="2025-12-12T18:29:37.870699200Z" level=error msg="Failed to destroy network for sandbox \"c8242a0780cad3cedfab21ad68fd411f7e304a0856a6d068da507f14d44794c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:37.872767 containerd[1664]: time="2025-12-12T18:29:37.872628346Z" level=error msg="Failed to destroy network for sandbox \"eb348b244dcc03b661fda80209d44fac9624a66a2c23c8a648c2c89161319961\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:37.877732 containerd[1664]: time="2025-12-12T18:29:37.877484794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lsgz8,Uid:6315f58f-593a-4458-a6a7-81537b517426,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb348b244dcc03b661fda80209d44fac9624a66a2c23c8a648c2c89161319961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:37.877874 kubelet[2966]: E1212 18:29:37.877796 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb348b244dcc03b661fda80209d44fac9624a66a2c23c8a648c2c89161319961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:37.877972 kubelet[2966]: E1212 18:29:37.877876 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb348b244dcc03b661fda80209d44fac9624a66a2c23c8a648c2c89161319961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lsgz8" Dec 12 18:29:37.877972 kubelet[2966]: E1212 18:29:37.877910 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb348b244dcc03b661fda80209d44fac9624a66a2c23c8a648c2c89161319961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lsgz8" Dec 12 18:29:37.878092 kubelet[2966]: E1212 18:29:37.877973 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lsgz8_kube-system(6315f58f-593a-4458-a6a7-81537b517426)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lsgz8_kube-system(6315f58f-593a-4458-a6a7-81537b517426)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb348b244dcc03b661fda80209d44fac9624a66a2c23c8a648c2c89161319961\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lsgz8" podUID="6315f58f-593a-4458-a6a7-81537b517426" Dec 12 18:29:37.880357 containerd[1664]: time="2025-12-12T18:29:37.880218312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c4cb5b5-v5k7t,Uid:dcf3ef86-77cd-46ff-befa-f79857cd4570,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8242a0780cad3cedfab21ad68fd411f7e304a0856a6d068da507f14d44794c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:37.880953 kubelet[2966]: E1212 18:29:37.880697 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8242a0780cad3cedfab21ad68fd411f7e304a0856a6d068da507f14d44794c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:29:37.880953 kubelet[2966]: E1212 18:29:37.880792 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8242a0780cad3cedfab21ad68fd411f7e304a0856a6d068da507f14d44794c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" Dec 12 18:29:37.880953 kubelet[2966]: E1212 18:29:37.880868 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8242a0780cad3cedfab21ad68fd411f7e304a0856a6d068da507f14d44794c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" Dec 12 18:29:37.881123 kubelet[2966]: E1212 18:29:37.880982 2966 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-557c4cb5b5-v5k7t_calico-apiserver(dcf3ef86-77cd-46ff-befa-f79857cd4570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-557c4cb5b5-v5k7t_calico-apiserver(dcf3ef86-77cd-46ff-befa-f79857cd4570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8242a0780cad3cedfab21ad68fd411f7e304a0856a6d068da507f14d44794c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" podUID="dcf3ef86-77cd-46ff-befa-f79857cd4570" Dec 12 18:29:37.903000 audit: BPF prog-id=176 op=LOAD Dec 12 18:29:37.911405 kernel: audit: type=1334 audit(1765564177.903:575): prog-id=176 op=LOAD Dec 12 18:29:37.903000 audit[3974]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3449 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:37.920680 kernel: audit: type=1300 audit(1765564177.903:575): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3449 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:37.922638 kernel: audit: type=1327 audit(1765564177.903:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613638643232633863626338306466323962396130626335653134 Dec 12 18:29:37.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613638643232633863626338306466323962396130626335653134 Dec 12 18:29:37.910000 audit: BPF prog-id=177 op=LOAD Dec 12 18:29:37.931482 kernel: audit: type=1334 audit(1765564177.910:576): prog-id=177 op=LOAD Dec 12 18:29:37.910000 audit[3974]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3449 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:37.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613638643232633863626338306466323962396130626335653134 Dec 12 18:29:37.910000 audit: BPF prog-id=177 op=UNLOAD Dec 12 18:29:37.910000 audit[3974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:37.910000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613638643232633863626338306466323962396130626335653134 Dec 12 18:29:37.910000 audit: BPF prog-id=176 op=UNLOAD Dec 12 18:29:37.910000 audit[3974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:37.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613638643232633863626338306466323962396130626335653134 Dec 12 18:29:37.910000 audit: BPF prog-id=178 op=LOAD Dec 12 18:29:37.910000 audit[3974]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3449 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:37.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613638643232633863626338306466323962396130626335653134 Dec 12 18:29:37.972794 containerd[1664]: time="2025-12-12T18:29:37.972554125Z" level=info msg="StartContainer for \"b1a68d22c8cbc80df29b9a0bc5e14de3c92966cb9265b33b3e1942ce3ff840d7\" returns successfully" Dec 12 18:29:38.294617 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:29:38.295585 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
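
The audit records above are dense but decodable. arch=c000003e is the audit architecture token for 64-bit little-endian x86-64 (the low 16 bits, 0x3e = 62, are the ELF machine EM_X86_64); against that architecture, syscall=46 is sendmsg (the netlink message carrying the nftables update), syscall=321 is bpf, and syscall=3 is close, which is why each BPF prog-id LOAD/UNLOAD pair above brackets a bpf() call and the close of its file descriptor. The proctitle field is simply the process command line, hex-encoded with NUL bytes separating arguments. A small decoding sketch; the sample string is the iptables-restore proctitle from audit records 573/574 earlier in the log:

    # Decode the hex-encoded PROCTITLE field from the audit records above.
    # The kernel logs the full command line with NUL bytes separating arguments.
    def decode_proctitle(hex_blob: str) -> str:
        return " ".join(p.decode() for p in bytes.fromhex(hex_blob).split(b"\x00") if p)

    # proctitle from the iptables-restore records (audit serial 573/574)
    sample = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
              "002D2D6E6F666C757368002D2D636F756E74657273")
    print(decode_proctitle(sample))   # iptables-restore -w 5 -W 100000 --noflush --counters

    # arch=c000003e: top bits flag 64-bit and little-endian, low 16 bits are the ELF machine.
    arch = 0xC000003E
    print(bool(arch & 0x80000000), bool(arch & 0x40000000), arch & 0xFFFF)  # True True 62

The runc proctitles in the records just above decode the same way, to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/..." followed by a truncated container ID, since the kernel caps the captured command line.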
Dec 12 18:29:38.498380 containerd[1664]: time="2025-12-12T18:29:38.497840870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d4f547dfc-vzz6v,Uid:2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b,Namespace:calico-system,Attempt:0,}" Dec 12 18:29:38.498380 containerd[1664]: time="2025-12-12T18:29:38.498258949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cwqst,Uid:1566f111-6981-4bf6-b05d-69ebc0c0ffaa,Namespace:calico-system,Attempt:0,}" Dec 12 18:29:39.207966 kubelet[2966]: I1212 18:29:39.187803 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7htd9" podStartSLOduration=3.026642234 podStartE2EDuration="31.186887357s" podCreationTimestamp="2025-12-12 18:29:08 +0000 UTC" firstStartedPulling="2025-12-12 18:29:09.06820404 +0000 UTC m=+24.800021741" lastFinishedPulling="2025-12-12 18:29:37.228449157 +0000 UTC m=+52.960266864" observedRunningTime="2025-12-12 18:29:39.185770932 +0000 UTC m=+54.917588659" watchObservedRunningTime="2025-12-12 18:29:39.186887357 +0000 UTC m=+54.918705067" Dec 12 18:29:39.406424 systemd-networkd[1573]: cali5de095e412a: Link UP Dec 12 18:29:39.412996 systemd-networkd[1573]: cali5de095e412a: Gained carrier Dec 12 18:29:39.493675 containerd[1664]: 2025-12-12 18:29:38.621 [INFO][4097] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:29:39.493675 containerd[1664]: 2025-12-12 18:29:38.729 [INFO][4097] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0 goldmane-666569f655- calico-system 1566f111-6981-4bf6-b05d-69ebc0c0ffaa 848 0 2025-12-12 18:29:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-vv1nl.gb1.brightbox.com goldmane-666569f655-cwqst eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5de095e412a [] [] }} ContainerID="56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" Namespace="calico-system" Pod="goldmane-666569f655-cwqst" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-" Dec 12 18:29:39.493675 containerd[1664]: 2025-12-12 18:29:38.729 [INFO][4097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" Namespace="calico-system" Pod="goldmane-666569f655-cwqst" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0" Dec 12 18:29:39.493675 containerd[1664]: 2025-12-12 18:29:39.111 [INFO][4123] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" HandleID="k8s-pod-network.56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" Workload="srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0" Dec 12 18:29:39.495630 containerd[1664]: 2025-12-12 18:29:39.117 [INFO][4123] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" HandleID="k8s-pod-network.56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" Workload="srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00060e380), 
Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vv1nl.gb1.brightbox.com", "pod":"goldmane-666569f655-cwqst", "timestamp":"2025-12-12 18:29:39.111454227 +0000 UTC"}, Hostname:"srv-vv1nl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:29:39.495630 containerd[1664]: 2025-12-12 18:29:39.117 [INFO][4123] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:29:39.495630 containerd[1664]: 2025-12-12 18:29:39.117 [INFO][4123] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:29:39.495630 containerd[1664]: 2025-12-12 18:29:39.120 [INFO][4123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vv1nl.gb1.brightbox.com' Dec 12 18:29:39.495630 containerd[1664]: 2025-12-12 18:29:39.163 [INFO][4123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.495630 containerd[1664]: 2025-12-12 18:29:39.214 [INFO][4123] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.495630 containerd[1664]: 2025-12-12 18:29:39.247 [INFO][4123] ipam/ipam.go 543: Ran out of existing affine blocks for host host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.495630 containerd[1664]: 2025-12-12 18:29:39.254 [INFO][4123] ipam/ipam.go 560: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.495630 containerd[1664]: 2025-12-12 18:29:39.261 [INFO][4123] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.62.0/26 Dec 12 18:29:39.497972 containerd[1664]: 2025-12-12 18:29:39.261 [INFO][4123] ipam/ipam.go 572: Found unclaimed block host="srv-vv1nl.gb1.brightbox.com" subnet=192.168.62.0/26 Dec 12 18:29:39.497972 containerd[1664]: 2025-12-12 18:29:39.261 [INFO][4123] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="srv-vv1nl.gb1.brightbox.com" subnet=192.168.62.0/26 Dec 12 18:29:39.497972 containerd[1664]: 2025-12-12 18:29:39.281 [INFO][4123] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="srv-vv1nl.gb1.brightbox.com" subnet=192.168.62.0/26 Dec 12 18:29:39.497972 containerd[1664]: 2025-12-12 18:29:39.281 [INFO][4123] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.497972 containerd[1664]: 2025-12-12 18:29:39.284 [INFO][4123] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.497972 containerd[1664]: 2025-12-12 18:29:39.293 [INFO][4123] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.497972 containerd[1664]: 2025-12-12 18:29:39.298 [INFO][4123] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.497972 containerd[1664]: 2025-12-12 18:29:39.298 [INFO][4123] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="srv-vv1nl.gb1.brightbox.com" subnet=192.168.62.0/26 Dec 12 18:29:39.497972 containerd[1664]: 2025-12-12 18:29:39.314 [INFO][4123] ipam/ipam_block_reader_writer.go 267: 
Successfully created block Dec 12 18:29:39.497972 containerd[1664]: 2025-12-12 18:29:39.314 [INFO][4123] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="srv-vv1nl.gb1.brightbox.com" subnet=192.168.62.0/26 Dec 12 18:29:39.497972 containerd[1664]: 2025-12-12 18:29:39.334 [INFO][4123] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="srv-vv1nl.gb1.brightbox.com" subnet=192.168.62.0/26 Dec 12 18:29:39.497972 containerd[1664]: 2025-12-12 18:29:39.334 [INFO][4123] ipam/ipam.go 607: Block '192.168.62.0/26' has 64 free ips which is more than 1 ips required. host="srv-vv1nl.gb1.brightbox.com" subnet=192.168.62.0/26 Dec 12 18:29:39.502325 containerd[1664]: 2025-12-12 18:29:39.334 [INFO][4123] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.502325 containerd[1664]: 2025-12-12 18:29:39.338 [INFO][4123] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa Dec 12 18:29:39.502325 containerd[1664]: 2025-12-12 18:29:39.345 [INFO][4123] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.502325 containerd[1664]: 2025-12-12 18:29:39.355 [INFO][4123] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.0/26] block=192.168.62.0/26 handle="k8s-pod-network.56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.502325 containerd[1664]: 2025-12-12 18:29:39.355 [INFO][4123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.0/26] handle="k8s-pod-network.56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.502325 containerd[1664]: 2025-12-12 18:29:39.355 [INFO][4123] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:29:39.502325 containerd[1664]: 2025-12-12 18:29:39.355 [INFO][4123] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.0/26] IPv6=[] ContainerID="56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" HandleID="k8s-pod-network.56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" Workload="srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0" Dec 12 18:29:39.503133 containerd[1664]: 2025-12-12 18:29:39.363 [INFO][4097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" Namespace="calico-system" Pod="goldmane-666569f655-cwqst" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1566f111-6981-4bf6-b05d-69ebc0c0ffaa", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-cwqst", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5de095e412a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:39.503296 containerd[1664]: 2025-12-12 18:29:39.366 [INFO][4097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.0/32] ContainerID="56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" Namespace="calico-system" Pod="goldmane-666569f655-cwqst" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0" Dec 12 18:29:39.503296 containerd[1664]: 2025-12-12 18:29:39.366 [INFO][4097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5de095e412a ContainerID="56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" Namespace="calico-system" Pod="goldmane-666569f655-cwqst" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0" Dec 12 18:29:39.503296 containerd[1664]: 2025-12-12 18:29:39.431 [INFO][4097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" Namespace="calico-system" Pod="goldmane-666569f655-cwqst" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0" Dec 12 18:29:39.503959 containerd[1664]: 2025-12-12 18:29:39.433 [INFO][4097] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" 
Namespace="calico-system" Pod="goldmane-666569f655-cwqst" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1566f111-6981-4bf6-b05d-69ebc0c0ffaa", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa", Pod:"goldmane-666569f655-cwqst", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5de095e412a", MAC:"f6:ee:c5:fc:20:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:39.504198 containerd[1664]: 2025-12-12 18:29:39.489 [INFO][4097] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" Namespace="calico-system" Pod="goldmane-666569f655-cwqst" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-goldmane--666569f655--cwqst-eth0" Dec 12 18:29:39.506934 containerd[1664]: time="2025-12-12T18:29:39.506846879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c4cb5b5-4c5jw,Uid:d20e72ff-82d7-435d-b1ff-5952de8d6823,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:29:39.508136 containerd[1664]: time="2025-12-12T18:29:39.508052025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzv84,Uid:30dcceea-b67a-4ecb-b6c6-16baeb5ae67c,Namespace:calico-system,Attempt:0,}" Dec 12 18:29:39.647950 systemd-networkd[1573]: calie4bfc6a6c17: Link UP Dec 12 18:29:39.652505 systemd-networkd[1573]: calie4bfc6a6c17: Gained carrier Dec 12 18:29:39.737590 containerd[1664]: 2025-12-12 18:29:38.633 [INFO][4101] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:29:39.737590 containerd[1664]: 2025-12-12 18:29:38.731 [INFO][4101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0 whisker-7d4f547dfc- calico-system 2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b 914 0 2025-12-12 18:29:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d4f547dfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-vv1nl.gb1.brightbox.com whisker-7d4f547dfc-vzz6v eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie4bfc6a6c17 [] [] }} 
ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Namespace="calico-system" Pod="whisker-7d4f547dfc-vzz6v" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-" Dec 12 18:29:39.737590 containerd[1664]: 2025-12-12 18:29:38.732 [INFO][4101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Namespace="calico-system" Pod="whisker-7d4f547dfc-vzz6v" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:39.737590 containerd[1664]: 2025-12-12 18:29:39.114 [INFO][4125] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" HandleID="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:39.737994 containerd[1664]: 2025-12-12 18:29:39.115 [INFO][4125] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" HandleID="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003740d0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vv1nl.gb1.brightbox.com", "pod":"whisker-7d4f547dfc-vzz6v", "timestamp":"2025-12-12 18:29:39.114099951 +0000 UTC"}, Hostname:"srv-vv1nl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:29:39.737994 containerd[1664]: 2025-12-12 18:29:39.115 [INFO][4125] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:29:39.737994 containerd[1664]: 2025-12-12 18:29:39.355 [INFO][4125] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:29:39.737994 containerd[1664]: 2025-12-12 18:29:39.355 [INFO][4125] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vv1nl.gb1.brightbox.com' Dec 12 18:29:39.737994 containerd[1664]: 2025-12-12 18:29:39.372 [INFO][4125] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.737994 containerd[1664]: 2025-12-12 18:29:39.429 [INFO][4125] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.737994 containerd[1664]: 2025-12-12 18:29:39.448 [INFO][4125] ipam/ipam.go 511: Trying affinity for 192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.737994 containerd[1664]: 2025-12-12 18:29:39.466 [INFO][4125] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.737994 containerd[1664]: 2025-12-12 18:29:39.473 [INFO][4125] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.738476 containerd[1664]: 2025-12-12 18:29:39.474 [INFO][4125] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.738476 containerd[1664]: 2025-12-12 18:29:39.487 [INFO][4125] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452 Dec 12 18:29:39.738476 containerd[1664]: 2025-12-12 18:29:39.502 [INFO][4125] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.738476 containerd[1664]: 2025-12-12 18:29:39.556 [INFO][4125] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.1/26] block=192.168.62.0/26 handle="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.738476 containerd[1664]: 2025-12-12 18:29:39.558 [INFO][4125] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.1/26] handle="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:39.738476 containerd[1664]: 2025-12-12 18:29:39.559 [INFO][4125] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:29:39.738476 containerd[1664]: 2025-12-12 18:29:39.560 [INFO][4125] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.1/26] IPv6=[] ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" HandleID="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:39.740646 containerd[1664]: 2025-12-12 18:29:39.624 [INFO][4101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Namespace="calico-system" Pod="whisker-7d4f547dfc-vzz6v" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0", GenerateName:"whisker-7d4f547dfc-", Namespace:"calico-system", SelfLink:"", UID:"2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d4f547dfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"", Pod:"whisker-7d4f547dfc-vzz6v", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie4bfc6a6c17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:39.740646 containerd[1664]: 2025-12-12 18:29:39.627 [INFO][4101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.1/32] ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Namespace="calico-system" Pod="whisker-7d4f547dfc-vzz6v" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:39.740802 containerd[1664]: 2025-12-12 18:29:39.627 [INFO][4101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4bfc6a6c17 ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Namespace="calico-system" Pod="whisker-7d4f547dfc-vzz6v" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:39.740802 containerd[1664]: 2025-12-12 18:29:39.660 [INFO][4101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Namespace="calico-system" Pod="whisker-7d4f547dfc-vzz6v" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:39.740998 containerd[1664]: 2025-12-12 18:29:39.674 [INFO][4101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Namespace="calico-system" 
Pod="whisker-7d4f547dfc-vzz6v" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0", GenerateName:"whisker-7d4f547dfc-", Namespace:"calico-system", SelfLink:"", UID:"2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d4f547dfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452", Pod:"whisker-7d4f547dfc-vzz6v", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie4bfc6a6c17", MAC:"7a:e0:55:e9:cd:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:39.741106 containerd[1664]: 2025-12-12 18:29:39.715 [INFO][4101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Namespace="calico-system" Pod="whisker-7d4f547dfc-vzz6v" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:39.974926 containerd[1664]: time="2025-12-12T18:29:39.974743987Z" level=info msg="connecting to shim 56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa" address="unix:///run/containerd/s/aed9277c4ed1f141de082cc15d6192148110ee88a3eba693ef86353d0e5ef5e1" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:29:39.993028 containerd[1664]: time="2025-12-12T18:29:39.992954701Z" level=info msg="connecting to shim dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" address="unix:///run/containerd/s/b47c73503dbdc6b9aad3573cedfbaf85ed834ecce01fdc254d1e13c215bd5fe5" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:29:40.106718 systemd[1]: Started cri-containerd-56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa.scope - libcontainer container 56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa. Dec 12 18:29:40.128463 systemd[1]: Started cri-containerd-dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452.scope - libcontainer container dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452. 
Dec 12 18:29:40.178036 systemd-networkd[1573]: cali044e7c85f36: Link UP Dec 12 18:29:40.188990 systemd-networkd[1573]: cali044e7c85f36: Gained carrier Dec 12 18:29:40.216418 containerd[1664]: 2025-12-12 18:29:39.813 [INFO][4173] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:29:40.216418 containerd[1664]: 2025-12-12 18:29:39.849 [INFO][4173] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0 csi-node-driver- calico-system 30dcceea-b67a-4ecb-b6c6-16baeb5ae67c 727 0 2025-12-12 18:29:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-vv1nl.gb1.brightbox.com csi-node-driver-mzv84 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali044e7c85f36 [] [] }} ContainerID="9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" Namespace="calico-system" Pod="csi-node-driver-mzv84" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-" Dec 12 18:29:40.216418 containerd[1664]: 2025-12-12 18:29:39.849 [INFO][4173] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" Namespace="calico-system" Pod="csi-node-driver-mzv84" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0" Dec 12 18:29:40.216418 containerd[1664]: 2025-12-12 18:29:40.032 [INFO][4221] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" HandleID="k8s-pod-network.9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" Workload="srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0" Dec 12 18:29:40.217959 containerd[1664]: 2025-12-12 18:29:40.034 [INFO][4221] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" HandleID="k8s-pod-network.9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" Workload="srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000392f40), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vv1nl.gb1.brightbox.com", "pod":"csi-node-driver-mzv84", "timestamp":"2025-12-12 18:29:40.032726923 +0000 UTC"}, Hostname:"srv-vv1nl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:29:40.217959 containerd[1664]: 2025-12-12 18:29:40.035 [INFO][4221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:29:40.217959 containerd[1664]: 2025-12-12 18:29:40.035 [INFO][4221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:29:40.217959 containerd[1664]: 2025-12-12 18:29:40.038 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vv1nl.gb1.brightbox.com' Dec 12 18:29:40.217959 containerd[1664]: 2025-12-12 18:29:40.059 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.217959 containerd[1664]: 2025-12-12 18:29:40.072 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.217959 containerd[1664]: 2025-12-12 18:29:40.091 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.217959 containerd[1664]: 2025-12-12 18:29:40.096 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.217959 containerd[1664]: 2025-12-12 18:29:40.109 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.219666 containerd[1664]: 2025-12-12 18:29:40.111 [INFO][4221] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.219666 containerd[1664]: 2025-12-12 18:29:40.116 [INFO][4221] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4 Dec 12 18:29:40.219666 containerd[1664]: 2025-12-12 18:29:40.134 [INFO][4221] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.219666 containerd[1664]: 2025-12-12 18:29:40.155 [INFO][4221] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.3/26] block=192.168.62.0/26 handle="k8s-pod-network.9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.219666 containerd[1664]: 2025-12-12 18:29:40.155 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.3/26] handle="k8s-pod-network.9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.219666 containerd[1664]: 2025-12-12 18:29:40.157 [INFO][4221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
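
With that, the node has handed out three addresses from its affine block: 192.168.62.0 (goldmane), .1 (whisker) and now .3 (csi-node-driver, claimed just above). The block claimed earlier is a /26, which is exactly why the IPAM log reported it as having 64 free IPs; a short standard-library sketch of that arithmetic:

    # The affine block claimed in the IPAM logs above, and the addresses handed out from it.
    import ipaddress

    block = ipaddress.ip_network("192.168.62.0/26")
    print(block.num_addresses)   # 64 -> "Block '192.168.62.0/26' has 64 free ips"

    assigned = ["192.168.62.0", "192.168.62.1", "192.168.62.3"]  # goldmane, whisker, csi-node-driver
    print(all(ipaddress.ip_address(a) in block for a in assigned))  # True: every /32 falls inside the block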
Dec 12 18:29:40.219666 containerd[1664]: 2025-12-12 18:29:40.158 [INFO][4221] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.3/26] IPv6=[] ContainerID="9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" HandleID="k8s-pod-network.9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" Workload="srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0" Dec 12 18:29:40.221186 containerd[1664]: 2025-12-12 18:29:40.166 [INFO][4173] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" Namespace="calico-system" Pod="csi-node-driver-mzv84" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30dcceea-b67a-4ecb-b6c6-16baeb5ae67c", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-mzv84", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali044e7c85f36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:40.221295 containerd[1664]: 2025-12-12 18:29:40.167 [INFO][4173] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.3/32] ContainerID="9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" Namespace="calico-system" Pod="csi-node-driver-mzv84" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0" Dec 12 18:29:40.221295 containerd[1664]: 2025-12-12 18:29:40.168 [INFO][4173] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali044e7c85f36 ContainerID="9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" Namespace="calico-system" Pod="csi-node-driver-mzv84" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0" Dec 12 18:29:40.221295 containerd[1664]: 2025-12-12 18:29:40.188 [INFO][4173] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" Namespace="calico-system" Pod="csi-node-driver-mzv84" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0" Dec 12 18:29:40.221472 containerd[1664]: 2025-12-12 18:29:40.188 [INFO][4173] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" Namespace="calico-system" Pod="csi-node-driver-mzv84" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30dcceea-b67a-4ecb-b6c6-16baeb5ae67c", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4", Pod:"csi-node-driver-mzv84", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali044e7c85f36", MAC:"c6:00:49:99:7a:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:40.222835 containerd[1664]: 2025-12-12 18:29:40.209 [INFO][4173] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" Namespace="calico-system" Pod="csi-node-driver-mzv84" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-csi--node--driver--mzv84-eth0" Dec 12 18:29:40.236746 kernel: kauditd_printk_skb: 11 callbacks suppressed Dec 12 18:29:40.236933 kernel: audit: type=1334 audit(1765564180.226:580): prog-id=179 op=LOAD Dec 12 18:29:40.226000 audit: BPF prog-id=179 op=LOAD Dec 12 18:29:40.243251 kernel: audit: type=1334 audit(1765564180.238:581): prog-id=180 op=LOAD Dec 12 18:29:40.238000 audit: BPF prog-id=180 op=LOAD Dec 12 18:29:40.238000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4242 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.254154 kernel: audit: type=1300 audit(1765564180.238:581): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4242 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663635306636336661616561333938326462366663363937636361 Dec 12 18:29:40.266217 
kernel: audit: type=1327 audit(1765564180.238:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663635306636336661616561333938326462366663363937636361 Dec 12 18:29:40.238000 audit: BPF prog-id=180 op=UNLOAD Dec 12 18:29:40.268360 kernel: audit: type=1334 audit(1765564180.238:582): prog-id=180 op=UNLOAD Dec 12 18:29:40.274379 kernel: audit: type=1300 audit(1765564180.238:582): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4242 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.238000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4242 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.282186 kernel: audit: type=1327 audit(1765564180.238:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663635306636336661616561333938326462366663363937636361 Dec 12 18:29:40.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663635306636336661616561333938326462366663363937636361 Dec 12 18:29:40.287242 kernel: audit: type=1334 audit(1765564180.238:583): prog-id=181 op=LOAD Dec 12 18:29:40.238000 audit: BPF prog-id=181 op=LOAD Dec 12 18:29:40.238000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4242 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.298305 kernel: audit: type=1300 audit(1765564180.238:583): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4242 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663635306636336661616561333938326462366663363937636361 Dec 12 18:29:40.322347 kernel: audit: type=1327 audit(1765564180.238:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663635306636336661616561333938326462366663363937636361 Dec 12 18:29:40.238000 audit: BPF prog-id=182 op=LOAD Dec 12 18:29:40.238000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4242 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663635306636336661616561333938326462366663363937636361 Dec 12 18:29:40.239000 audit: BPF prog-id=182 op=UNLOAD Dec 12 18:29:40.239000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4242 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663635306636336661616561333938326462366663363937636361 Dec 12 18:29:40.239000 audit: BPF prog-id=181 op=UNLOAD Dec 12 18:29:40.239000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4242 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663635306636336661616561333938326462366663363937636361 Dec 12 18:29:40.239000 audit: BPF prog-id=183 op=LOAD Dec 12 18:29:40.239000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4242 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536663635306636336661616561333938326462366663363937636361 Dec 12 18:29:40.252000 audit: BPF prog-id=184 op=LOAD Dec 12 18:29:40.260000 audit: BPF prog-id=185 op=LOAD Dec 12 18:29:40.260000 audit[4288]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000170238 a2=98 a3=0 items=0 ppid=4248 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636339626364393464633237313963663238626466623535343565 Dec 12 18:29:40.260000 audit: BPF prog-id=185 op=UNLOAD Dec 12 18:29:40.260000 audit[4288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4248 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.260000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636339626364393464633237313963663238626466623535343565 Dec 12 18:29:40.263000 audit: BPF prog-id=186 op=LOAD Dec 12 18:29:40.263000 audit[4288]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000170488 a2=98 a3=0 items=0 ppid=4248 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636339626364393464633237313963663238626466623535343565 Dec 12 18:29:40.265000 audit: BPF prog-id=187 op=LOAD Dec 12 18:29:40.265000 audit[4288]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000170218 a2=98 a3=0 items=0 ppid=4248 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636339626364393464633237313963663238626466623535343565 Dec 12 18:29:40.269000 audit: BPF prog-id=187 op=UNLOAD Dec 12 18:29:40.269000 audit[4288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4248 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636339626364393464633237313963663238626466623535343565 Dec 12 18:29:40.269000 audit: BPF prog-id=186 op=UNLOAD Dec 12 18:29:40.269000 audit[4288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4248 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636339626364393464633237313963663238626466623535343565 Dec 12 18:29:40.272000 audit: BPF prog-id=188 op=LOAD Dec 12 18:29:40.272000 audit[4288]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001706e8 a2=98 a3=0 items=0 ppid=4248 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.272000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462636339626364393464633237313963663238626466623535343565 Dec 12 18:29:40.372176 systemd-networkd[1573]: cali74143edf22e: Link UP Dec 12 18:29:40.377441 systemd-networkd[1573]: cali74143edf22e: Gained carrier Dec 12 18:29:40.381062 containerd[1664]: time="2025-12-12T18:29:40.380727749Z" level=info msg="connecting to shim 9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4" address="unix:///run/containerd/s/a119c1026f30420410c8a25a7e829d47c3699a2e6fb4465e9cd1f64c91ea7654" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:29:40.451695 systemd[1]: Started cri-containerd-9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4.scope - libcontainer container 9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4. Dec 12 18:29:40.471113 containerd[1664]: 2025-12-12 18:29:39.846 [INFO][4179] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:29:40.471113 containerd[1664]: 2025-12-12 18:29:39.894 [INFO][4179] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0 calico-apiserver-557c4cb5b5- calico-apiserver d20e72ff-82d7-435d-b1ff-5952de8d6823 841 0 2025-12-12 18:29:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:557c4cb5b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-vv1nl.gb1.brightbox.com calico-apiserver-557c4cb5b5-4c5jw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali74143edf22e [] [] }} ContainerID="162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-4c5jw" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-" Dec 12 18:29:40.471113 containerd[1664]: 2025-12-12 18:29:39.894 [INFO][4179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-4c5jw" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0" Dec 12 18:29:40.471113 containerd[1664]: 2025-12-12 18:29:40.079 [INFO][4227] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" HandleID="k8s-pod-network.162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" Workload="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0" Dec 12 18:29:40.471640 containerd[1664]: 2025-12-12 18:29:40.079 [INFO][4227] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" HandleID="k8s-pod-network.162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" Workload="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033e360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-vv1nl.gb1.brightbox.com", "pod":"calico-apiserver-557c4cb5b5-4c5jw", 
"timestamp":"2025-12-12 18:29:40.079275299 +0000 UTC"}, Hostname:"srv-vv1nl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:29:40.471640 containerd[1664]: 2025-12-12 18:29:40.079 [INFO][4227] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:29:40.471640 containerd[1664]: 2025-12-12 18:29:40.156 [INFO][4227] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:29:40.471640 containerd[1664]: 2025-12-12 18:29:40.156 [INFO][4227] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vv1nl.gb1.brightbox.com' Dec 12 18:29:40.471640 containerd[1664]: 2025-12-12 18:29:40.180 [INFO][4227] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.471640 containerd[1664]: 2025-12-12 18:29:40.212 [INFO][4227] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.471640 containerd[1664]: 2025-12-12 18:29:40.248 [INFO][4227] ipam/ipam.go 511: Trying affinity for 192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.471640 containerd[1664]: 2025-12-12 18:29:40.266 [INFO][4227] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.471640 containerd[1664]: 2025-12-12 18:29:40.289 [INFO][4227] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.472063 containerd[1664]: 2025-12-12 18:29:40.289 [INFO][4227] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.472063 containerd[1664]: 2025-12-12 18:29:40.300 [INFO][4227] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914 Dec 12 18:29:40.472063 containerd[1664]: 2025-12-12 18:29:40.324 [INFO][4227] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.472063 containerd[1664]: 2025-12-12 18:29:40.349 [INFO][4227] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.4/26] block=192.168.62.0/26 handle="k8s-pod-network.162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.472063 containerd[1664]: 2025-12-12 18:29:40.349 [INFO][4227] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.4/26] handle="k8s-pod-network.162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:40.472063 containerd[1664]: 2025-12-12 18:29:40.349 [INFO][4227] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:29:40.472063 containerd[1664]: 2025-12-12 18:29:40.350 [INFO][4227] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.4/26] IPv6=[] ContainerID="162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" HandleID="k8s-pod-network.162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" Workload="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0" Dec 12 18:29:40.475548 containerd[1664]: 2025-12-12 18:29:40.358 [INFO][4179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-4c5jw" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0", GenerateName:"calico-apiserver-557c4cb5b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d20e72ff-82d7-435d-b1ff-5952de8d6823", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"557c4cb5b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-557c4cb5b5-4c5jw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74143edf22e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:40.475680 containerd[1664]: 2025-12-12 18:29:40.359 [INFO][4179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.4/32] ContainerID="162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-4c5jw" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0" Dec 12 18:29:40.475680 containerd[1664]: 2025-12-12 18:29:40.359 [INFO][4179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74143edf22e ContainerID="162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-4c5jw" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0" Dec 12 18:29:40.475680 containerd[1664]: 2025-12-12 18:29:40.392 [INFO][4179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-4c5jw" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0" Dec 12 18:29:40.476416 containerd[1664]: 2025-12-12 18:29:40.394 
[INFO][4179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-4c5jw" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0", GenerateName:"calico-apiserver-557c4cb5b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d20e72ff-82d7-435d-b1ff-5952de8d6823", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"557c4cb5b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914", Pod:"calico-apiserver-557c4cb5b5-4c5jw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74143edf22e", MAC:"ee:b9:80:48:70:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:40.476516 containerd[1664]: 2025-12-12 18:29:40.460 [INFO][4179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-4c5jw" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--4c5jw-eth0" Dec 12 18:29:40.501000 audit: BPF prog-id=189 op=LOAD Dec 12 18:29:40.504000 audit: BPF prog-id=190 op=LOAD Dec 12 18:29:40.504000 audit[4363]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4351 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964353137663131316463336232316332626236353037376539653437 Dec 12 18:29:40.504000 audit: BPF prog-id=190 op=UNLOAD Dec 12 18:29:40.504000 audit[4363]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4351 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.504000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964353137663131316463336232316332626236353037376539653437 Dec 12 18:29:40.504000 audit: BPF prog-id=191 op=LOAD Dec 12 18:29:40.504000 audit[4363]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4351 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964353137663131316463336232316332626236353037376539653437 Dec 12 18:29:40.505000 audit: BPF prog-id=192 op=LOAD Dec 12 18:29:40.505000 audit[4363]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4351 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964353137663131316463336232316332626236353037376539653437 Dec 12 18:29:40.505000 audit: BPF prog-id=192 op=UNLOAD Dec 12 18:29:40.505000 audit[4363]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4351 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964353137663131316463336232316332626236353037376539653437 Dec 12 18:29:40.505000 audit: BPF prog-id=191 op=UNLOAD Dec 12 18:29:40.505000 audit[4363]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4351 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964353137663131316463336232316332626236353037376539653437 Dec 12 18:29:40.505000 audit: BPF prog-id=193 op=LOAD Dec 12 18:29:40.505000 audit[4363]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4351 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.505000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964353137663131316463336232316332626236353037376539653437 Dec 12 18:29:40.565728 containerd[1664]: time="2025-12-12T18:29:40.565567231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cwqst,Uid:1566f111-6981-4bf6-b05d-69ebc0c0ffaa,Namespace:calico-system,Attempt:0,} returns sandbox id \"56f650f63faaea3982db6fc697cca4f929245f9b628b0fec0dcc4241b412ecaa\"" Dec 12 18:29:40.607857 containerd[1664]: time="2025-12-12T18:29:40.607789241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:29:40.612276 containerd[1664]: time="2025-12-12T18:29:40.611853578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzv84,Uid:30dcceea-b67a-4ecb-b6c6-16baeb5ae67c,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d517f111dc3b21c2bb65077e9e471bdee01e73ebff4e250727ecf2e92ea65c4\"" Dec 12 18:29:40.630818 containerd[1664]: time="2025-12-12T18:29:40.630764423Z" level=info msg="connecting to shim 162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914" address="unix:///run/containerd/s/b97542dbfbb2fb0397e8b1673178e6e7c85de6be70a45621d6d92b2a95b812c6" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:29:40.631883 containerd[1664]: time="2025-12-12T18:29:40.631847562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d4f547dfc-vzz6v,Uid:2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\"" Dec 12 18:29:40.686516 systemd[1]: Started cri-containerd-162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914.scope - libcontainer container 162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914. 
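Each audit record above carries a PROCTITLE field, which is the invoking command line hex-encoded with NUL separators between arguments (and truncated when the command line is long, which is why the runc records end partway through a container ID). The short Go sketch below is not part of any tool in this log; it simply decodes such a field, and the embedded example value is the iptables-restor(e) proctitle that appears further down in this log.

// proctitle_decode.go: decode the hex-encoded PROCTITLE value of an audit
// record back into a readable command line. Pass a proctitle value as the
// first argument, or run it as-is to decode the embedded example.
package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Example value copied verbatim from the NETFILTER_CFG audit records
	// later in this log.
	proctitle := "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	if len(os.Args) > 1 {
		proctitle = os.Args[1]
	}
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		fmt.Fprintln(os.Stderr, "not a hex proctitle:", err)
		os.Exit(1)
	}
	// Arguments are separated by NUL bytes; join them with spaces.
	argv := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	fmt.Println(strings.Join(argv, " "))
	// Prints: iptables-restore -w 5 -W 100000 --noflush --counters
}

Decoding the runc proctitles in the surrounding records the same way yields runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/ followed by a truncated container ID, and the bpftool proctitles further down decode to bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs ... and bpftool map list --json invocations.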
Dec 12 18:29:40.701457 systemd-networkd[1573]: cali5de095e412a: Gained IPv6LL Dec 12 18:29:40.706000 audit: BPF prog-id=194 op=LOAD Dec 12 18:29:40.707000 audit: BPF prog-id=195 op=LOAD Dec 12 18:29:40.707000 audit[4423]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4411 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136326565613765636431336235613435396137663331653664363533 Dec 12 18:29:40.707000 audit: BPF prog-id=195 op=UNLOAD Dec 12 18:29:40.707000 audit[4423]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4411 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136326565613765636431336235613435396137663331653664363533 Dec 12 18:29:40.707000 audit: BPF prog-id=196 op=LOAD Dec 12 18:29:40.707000 audit[4423]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4411 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136326565613765636431336235613435396137663331653664363533 Dec 12 18:29:40.707000 audit: BPF prog-id=197 op=LOAD Dec 12 18:29:40.707000 audit[4423]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4411 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136326565613765636431336235613435396137663331653664363533 Dec 12 18:29:40.707000 audit: BPF prog-id=197 op=UNLOAD Dec 12 18:29:40.707000 audit[4423]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4411 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136326565613765636431336235613435396137663331653664363533 Dec 12 18:29:40.707000 
audit: BPF prog-id=196 op=UNLOAD Dec 12 18:29:40.707000 audit[4423]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4411 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136326565613765636431336235613435396137663331653664363533 Dec 12 18:29:40.707000 audit: BPF prog-id=198 op=LOAD Dec 12 18:29:40.707000 audit[4423]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4411 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:40.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136326565613765636431336235613435396137663331653664363533 Dec 12 18:29:40.764225 containerd[1664]: time="2025-12-12T18:29:40.763512220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c4cb5b5-4c5jw,Uid:d20e72ff-82d7-435d-b1ff-5952de8d6823,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"162eea7ecd13b5a459a7f31e6d65350b66d4a89ec3f0d7ed69a6a2b515487914\"" Dec 12 18:29:40.949565 containerd[1664]: time="2025-12-12T18:29:40.949482304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:40.950649 containerd[1664]: time="2025-12-12T18:29:40.950547791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:29:40.950649 containerd[1664]: time="2025-12-12T18:29:40.950606254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:40.954149 kubelet[2966]: E1212 18:29:40.954069 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:29:40.956265 kubelet[2966]: E1212 18:29:40.956111 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:29:40.957676 containerd[1664]: time="2025-12-12T18:29:40.957378534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:29:40.967571 kubelet[2966]: E1212 18:29:40.966595 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw84z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cwqst_calico-system(1566f111-6981-4bf6-b05d-69ebc0c0ffaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:40.971215 kubelet[2966]: E1212 18:29:40.970677 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwqst" podUID="1566f111-6981-4bf6-b05d-69ebc0c0ffaa" Dec 12 18:29:40.987661 kubelet[2966]: E1212 18:29:40.987420 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwqst" podUID="1566f111-6981-4bf6-b05d-69ebc0c0ffaa" Dec 12 18:29:41.157000 audit[4531]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:41.157000 audit[4531]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd293ce8d0 a2=0 a3=7ffd293ce8bc items=0 ppid=3085 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:41.161000 audit[4531]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:41.161000 audit[4531]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd293ce8d0 a2=0 a3=0 items=0 ppid=3085 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:41.279533 containerd[1664]: time="2025-12-12T18:29:41.279445802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:41.281415 containerd[1664]: time="2025-12-12T18:29:41.281354100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:29:41.281574 containerd[1664]: time="2025-12-12T18:29:41.281525069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:41.283381 kubelet[2966]: E1212 18:29:41.283300 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:29:41.283499 kubelet[2966]: E1212 18:29:41.283407 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:29:41.283894 kubelet[2966]: E1212 18:29:41.283770 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9hcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzv84_calico-system(30dcceea-b67a-4ecb-b6c6-16baeb5ae67c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:41.284150 containerd[1664]: time="2025-12-12T18:29:41.284113614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:29:41.340794 systemd-networkd[1573]: calie4bfc6a6c17: Gained IPv6LL Dec 12 18:29:41.616679 containerd[1664]: time="2025-12-12T18:29:41.616565019Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:41.618854 containerd[1664]: time="2025-12-12T18:29:41.618487903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:29:41.619229 containerd[1664]: time="2025-12-12T18:29:41.618580833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:41.619498 kubelet[2966]: E1212 18:29:41.619436 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:29:41.619877 kubelet[2966]: E1212 18:29:41.619620 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:29:41.620363 kubelet[2966]: E1212 18:29:41.619984 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:54e2ece65d4e4e24bb0d14e858b9af4a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lhzkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d4f547dfc-vzz6v_calico-system(2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:41.621926 containerd[1664]: time="2025-12-12T18:29:41.621829130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:29:41.724344 systemd-networkd[1573]: cali74143edf22e: Gained IPv6LL Dec 12 18:29:41.818000 audit: BPF prog-id=199 op=LOAD Dec 12 18:29:41.818000 audit[4595]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9053ab50 a2=98 a3=1fffffffffffffff items=0 ppid=4490 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.818000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:29:41.818000 audit: BPF prog-id=199 op=UNLOAD Dec 12 18:29:41.818000 audit[4595]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd9053ab20 a3=0 items=0 ppid=4490 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.818000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:29:41.819000 audit: BPF prog-id=200 op=LOAD Dec 12 18:29:41.819000 audit[4595]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9053aa30 a2=94 a3=3 items=0 ppid=4490 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.819000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:29:41.819000 audit: BPF prog-id=200 op=UNLOAD Dec 12 18:29:41.819000 audit[4595]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd9053aa30 a2=94 a3=3 items=0 ppid=4490 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.819000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:29:41.819000 audit: BPF prog-id=201 op=LOAD Dec 12 18:29:41.819000 audit[4595]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9053aa70 a2=94 a3=7ffd9053ac50 items=0 ppid=4490 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.819000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:29:41.820000 audit: BPF prog-id=201 op=UNLOAD Dec 12 18:29:41.820000 audit[4595]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd9053aa70 a2=94 a3=7ffd9053ac50 items=0 ppid=4490 pid=4595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.820000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:29:41.829000 audit: BPF prog-id=202 op=LOAD Dec 12 18:29:41.829000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe23963170 a2=98 a3=3 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.829000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:41.829000 audit: BPF 
prog-id=202 op=UNLOAD Dec 12 18:29:41.829000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe23963140 a3=0 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.829000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:41.830000 audit: BPF prog-id=203 op=LOAD Dec 12 18:29:41.830000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe23962f60 a2=94 a3=54428f items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:41.831000 audit: BPF prog-id=203 op=UNLOAD Dec 12 18:29:41.831000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe23962f60 a2=94 a3=54428f items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:41.831000 audit: BPF prog-id=204 op=LOAD Dec 12 18:29:41.831000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe23962f90 a2=94 a3=2 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:41.831000 audit: BPF prog-id=204 op=UNLOAD Dec 12 18:29:41.831000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe23962f90 a2=0 a3=2 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:41.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:41.933399 containerd[1664]: time="2025-12-12T18:29:41.933216646Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:41.934422 containerd[1664]: time="2025-12-12T18:29:41.934355589Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:29:41.934510 containerd[1664]: time="2025-12-12T18:29:41.934469254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:41.935020 kubelet[2966]: E1212 18:29:41.934963 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:29:41.935256 kubelet[2966]: E1212 18:29:41.935155 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:29:41.936499 kubelet[2966]: E1212 18:29:41.935455 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqr7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-557c4cb5b5-4c5jw_calico-apiserver(d20e72ff-82d7-435d-b1ff-5952de8d6823): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:41.937018 containerd[1664]: time="2025-12-12T18:29:41.936899252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:29:41.937644 kubelet[2966]: E1212 18:29:41.936751 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" podUID="d20e72ff-82d7-435d-b1ff-5952de8d6823" Dec 12 18:29:41.991153 kubelet[2966]: E1212 18:29:41.990919 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwqst" podUID="1566f111-6981-4bf6-b05d-69ebc0c0ffaa" Dec 12 18:29:41.993748 kubelet[2966]: E1212 18:29:41.993315 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" podUID="d20e72ff-82d7-435d-b1ff-5952de8d6823" Dec 12 18:29:42.069000 audit[4599]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:42.069000 audit[4599]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe13847940 a2=0 a3=7ffe1384792c items=0 ppid=3085 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.069000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:42.075000 audit[4599]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:42.075000 audit[4599]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe13847940 a2=0 a3=0 items=0 ppid=3085 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.075000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:42.108440 systemd-networkd[1573]: cali044e7c85f36: Gained IPv6LL Dec 12 18:29:42.145000 audit: BPF prog-id=205 op=LOAD Dec 12 18:29:42.145000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe23962e50 a2=94 a3=1 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.145000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:42.145000 audit: BPF prog-id=205 op=UNLOAD Dec 12 18:29:42.145000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe23962e50 a2=94 a3=1 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.145000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:42.161000 audit: BPF prog-id=206 op=LOAD Dec 12 18:29:42.161000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe23962e40 
a2=94 a3=4 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.161000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:42.162000 audit: BPF prog-id=206 op=UNLOAD Dec 12 18:29:42.162000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe23962e40 a2=0 a3=4 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.162000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:42.162000 audit: BPF prog-id=207 op=LOAD Dec 12 18:29:42.162000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe23962ca0 a2=94 a3=5 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.162000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:42.162000 audit: BPF prog-id=207 op=UNLOAD Dec 12 18:29:42.162000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe23962ca0 a2=0 a3=5 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.162000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:42.162000 audit: BPF prog-id=208 op=LOAD Dec 12 18:29:42.162000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe23962ec0 a2=94 a3=6 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.162000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:42.164000 audit: BPF prog-id=208 op=UNLOAD Dec 12 18:29:42.164000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe23962ec0 a2=0 a3=6 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.164000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:42.164000 audit: BPF prog-id=209 op=LOAD Dec 12 18:29:42.164000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe23962670 a2=94 a3=88 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.164000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:42.165000 audit: BPF prog-id=210 op=LOAD Dec 12 18:29:42.165000 audit[4596]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe239624f0 a2=94 a3=2 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.165000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:42.165000 audit: BPF prog-id=210 op=UNLOAD Dec 12 18:29:42.165000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe23962520 a2=0 a3=7ffe23962620 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.165000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:42.166000 audit: BPF prog-id=209 op=UNLOAD Dec 12 18:29:42.166000 audit[4596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2c96ed10 a2=0 a3=853807fcee951002 items=0 ppid=4490 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.166000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:29:42.182000 audit: BPF prog-id=211 op=LOAD Dec 12 18:29:42.182000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc747e0550 a2=98 a3=1999999999999999 items=0 ppid=4490 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.182000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:29:42.182000 audit: BPF prog-id=211 op=UNLOAD Dec 12 18:29:42.182000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc747e0520 a3=0 items=0 ppid=4490 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.182000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:29:42.182000 audit: BPF prog-id=212 op=LOAD Dec 12 18:29:42.182000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc747e0430 a2=94 a3=ffff items=0 ppid=4490 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.182000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:29:42.182000 audit: BPF prog-id=212 op=UNLOAD Dec 12 18:29:42.182000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc747e0430 a2=94 a3=ffff items=0 ppid=4490 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.182000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:29:42.182000 audit: BPF prog-id=213 op=LOAD Dec 12 18:29:42.182000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc747e0470 a2=94 a3=7ffc747e0650 items=0 ppid=4490 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.182000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:29:42.182000 audit: BPF prog-id=213 op=UNLOAD Dec 12 18:29:42.182000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc747e0470 a2=94 a3=7ffc747e0650 items=0 ppid=4490 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.182000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:29:42.250455 containerd[1664]: time="2025-12-12T18:29:42.250373641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:42.251503 containerd[1664]: time="2025-12-12T18:29:42.251400709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:29:42.251744 containerd[1664]: time="2025-12-12T18:29:42.251460212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:42.252512 kubelet[2966]: E1212 18:29:42.252443 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:29:42.253257 kubelet[2966]: E1212 18:29:42.252531 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:29:42.253257 kubelet[2966]: E1212 18:29:42.252845 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9hcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzv84_calico-system(30dcceea-b67a-4ecb-b6c6-16baeb5ae67c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:42.254734 containerd[1664]: time="2025-12-12T18:29:42.253846894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:29:42.255241 kubelet[2966]: E1212 18:29:42.254984 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:29:42.297660 systemd-networkd[1573]: vxlan.calico: Link UP Dec 12 18:29:42.297673 systemd-networkd[1573]: vxlan.calico: Gained carrier Dec 12 18:29:42.339000 audit: BPF prog-id=214 op=LOAD Dec 12 18:29:42.339000 audit[4630]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc967e3fd0 a2=98 a3=20 items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.339000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.339000 audit: BPF prog-id=214 op=UNLOAD Dec 12 18:29:42.339000 audit[4630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc967e3fa0 a3=0 items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.339000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.345000 audit: BPF prog-id=215 op=LOAD Dec 12 18:29:42.345000 audit[4630]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc967e3de0 a2=94 a3=54428f items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.345000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.346000 audit: BPF prog-id=215 op=UNLOAD Dec 12 18:29:42.346000 audit[4630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc967e3de0 a2=94 a3=54428f items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.346000 audit: BPF prog-id=216 op=LOAD Dec 12 18:29:42.346000 audit[4630]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc967e3e10 a2=94 a3=2 items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.346000 audit: BPF prog-id=216 op=UNLOAD Dec 12 18:29:42.346000 audit[4630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc967e3e10 a2=0 a3=2 items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.346000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.346000 audit: BPF prog-id=217 op=LOAD Dec 12 18:29:42.346000 audit[4630]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc967e3bc0 a2=94 a3=4 items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.346000 audit: BPF prog-id=217 op=UNLOAD Dec 12 18:29:42.346000 audit[4630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc967e3bc0 a2=94 a3=4 items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.346000 audit: BPF prog-id=218 op=LOAD Dec 12 18:29:42.346000 audit[4630]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc967e3cc0 a2=94 a3=7ffc967e3e40 items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.346000 audit: BPF prog-id=218 op=UNLOAD Dec 12 18:29:42.346000 audit[4630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc967e3cc0 a2=0 a3=7ffc967e3e40 items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.349000 audit: BPF prog-id=219 op=LOAD Dec 12 18:29:42.349000 audit[4630]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc967e33f0 a2=94 a3=2 items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.349000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.353000 audit: BPF prog-id=219 op=UNLOAD Dec 12 18:29:42.353000 audit[4630]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc967e33f0 a2=0 a3=2 items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.353000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.353000 audit: BPF prog-id=220 op=LOAD Dec 12 18:29:42.353000 audit[4630]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc967e34f0 a2=94 a3=30 items=0 ppid=4490 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.353000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:29:42.367000 audit: BPF prog-id=221 op=LOAD Dec 12 18:29:42.367000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc01e76b30 a2=98 a3=0 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.367000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.368000 audit: BPF prog-id=221 op=UNLOAD Dec 12 18:29:42.368000 audit[4636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc01e76b00 a3=0 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.368000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.368000 audit: BPF prog-id=222 op=LOAD Dec 12 18:29:42.368000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc01e76920 a2=94 a3=54428f items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.368000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.368000 audit: BPF prog-id=222 op=UNLOAD Dec 12 18:29:42.368000 audit[4636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc01e76920 a2=94 a3=54428f items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.368000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.368000 audit: BPF prog-id=223 op=LOAD Dec 12 18:29:42.368000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc01e76950 a2=94 a3=2 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.368000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.369000 audit: BPF prog-id=223 op=UNLOAD Dec 12 18:29:42.369000 audit[4636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc01e76950 a2=0 a3=2 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.369000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.559773 containerd[1664]: time="2025-12-12T18:29:42.559667587Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:42.560953 containerd[1664]: time="2025-12-12T18:29:42.560875256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:29:42.561071 containerd[1664]: time="2025-12-12T18:29:42.561040441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:42.561556 kubelet[2966]: E1212 18:29:42.561469 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:29:42.561850 kubelet[2966]: E1212 18:29:42.561808 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:29:42.562614 kubelet[2966]: E1212 18:29:42.562536 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhzkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d4f547dfc-vzz6v_calico-system(2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:42.563800 kubelet[2966]: E1212 18:29:42.563746 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d4f547dfc-vzz6v" podUID="2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b" Dec 12 18:29:42.643000 audit: BPF prog-id=224 op=LOAD Dec 12 18:29:42.643000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc01e76810 a2=94 a3=1 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.643000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.644000 audit: BPF prog-id=224 
op=UNLOAD Dec 12 18:29:42.644000 audit[4636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc01e76810 a2=94 a3=1 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.644000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.659000 audit: BPF prog-id=225 op=LOAD Dec 12 18:29:42.659000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc01e76800 a2=94 a3=4 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.659000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.659000 audit: BPF prog-id=225 op=UNLOAD Dec 12 18:29:42.659000 audit[4636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc01e76800 a2=0 a3=4 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.659000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.660000 audit: BPF prog-id=226 op=LOAD Dec 12 18:29:42.660000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc01e76660 a2=94 a3=5 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.660000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.661000 audit: BPF prog-id=226 op=UNLOAD Dec 12 18:29:42.661000 audit[4636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc01e76660 a2=0 a3=5 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.661000 audit: BPF prog-id=227 op=LOAD Dec 12 18:29:42.661000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc01e76880 a2=94 a3=6 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.661000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.661000 audit: BPF prog-id=227 op=UNLOAD Dec 12 18:29:42.661000 audit[4636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc01e76880 a2=0 a3=6 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.662000 audit: BPF prog-id=228 op=LOAD Dec 12 18:29:42.662000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc01e76030 a2=94 a3=88 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.662000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.662000 audit: BPF prog-id=229 op=LOAD Dec 12 18:29:42.662000 audit[4636]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc01e75eb0 a2=94 a3=2 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.662000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.663000 audit: BPF prog-id=229 op=UNLOAD Dec 12 18:29:42.663000 audit[4636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc01e75ee0 a2=0 a3=7ffc01e75fe0 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.663000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.664000 audit: BPF prog-id=228 op=UNLOAD Dec 12 18:29:42.664000 audit[4636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=145e8d10 a2=0 a3=17c483eda543ebc8 items=0 ppid=4490 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.664000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:29:42.672000 audit: BPF prog-id=220 op=UNLOAD Dec 12 18:29:42.672000 audit[4490]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000739a40 a2=0 a3=0 items=0 ppid=4456 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.672000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 18:29:42.778000 audit[4673]: NETFILTER_CFG table=nat:125 family=2 entries=15 op=nft_register_chain pid=4673 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:42.778000 audit[4673]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdc67108c0 a2=0 a3=7ffdc67108ac items=0 ppid=4490 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.778000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:42.783000 audit[4674]: NETFILTER_CFG table=mangle:126 family=2 entries=16 op=nft_register_chain pid=4674 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:42.783000 audit[4674]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe33dfeb40 a2=0 a3=7ffe33dfeb2c items=0 ppid=4490 pid=4674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.783000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:42.787000 audit[4672]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=4672 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:42.787000 audit[4672]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe3e5c5040 a2=0 a3=7ffe3e5c502c items=0 ppid=4490 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.787000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:42.800000 audit[4678]: NETFILTER_CFG table=filter:128 family=2 entries=200 op=nft_register_chain pid=4678 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:42.800000 audit[4678]: SYSCALL arch=c000003e syscall=46 success=yes exit=117380 a0=3 a1=7fffc2fd0110 a2=0 a3=7fffc2fd00fc items=0 ppid=4490 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:42.800000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:42.997403 containerd[1664]: time="2025-12-12T18:29:42.996829711Z" level=info msg="StopPodSandbox for \"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\"" Dec 12 18:29:43.005725 kubelet[2966]: E1212 18:29:43.005541 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:29:43.022589 systemd[1]: cri-containerd-dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452.scope: Deactivated successfully. Dec 12 18:29:43.026000 audit: BPF prog-id=184 op=UNLOAD Dec 12 18:29:43.026000 audit: BPF prog-id=188 op=UNLOAD Dec 12 18:29:43.037920 containerd[1664]: time="2025-12-12T18:29:43.037827089Z" level=info msg="received sandbox exit event container_id:\"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\" id:\"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\" exit_status:137 exited_at:{seconds:1765564183 nanos:34703046}" monitor_name=podsandbox Dec 12 18:29:43.098711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452-rootfs.mount: Deactivated successfully. Dec 12 18:29:43.103000 audit[4709]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:43.103000 audit[4709]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdfdcfe090 a2=0 a3=7ffdfdcfe07c items=0 ppid=3085 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:43.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:43.107000 audit[4709]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:43.107000 audit[4709]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdfdcfe090 a2=0 a3=0 items=0 ppid=3085 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:43.107000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:43.131472 containerd[1664]: time="2025-12-12T18:29:43.131252583Z" level=info msg="shim disconnected" id=dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452 namespace=k8s.io Dec 12 18:29:43.131472 containerd[1664]: time="2025-12-12T18:29:43.131314562Z" level=info msg="cleaning up after shim disconnected" id=dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452 namespace=k8s.io Dec 12 18:29:43.136906 containerd[1664]: time="2025-12-12T18:29:43.131330231Z" level=info msg="cleaning up dead shim" id=dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452 namespace=k8s.io Dec 12 
18:29:43.203379 containerd[1664]: time="2025-12-12T18:29:43.203237133Z" level=info msg="received sandbox container exit event sandbox_id:\"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\" exit_status:137 exited_at:{seconds:1765564183 nanos:34703046}" monitor_name=criService Dec 12 18:29:43.206619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452-shm.mount: Deactivated successfully. Dec 12 18:29:43.291948 systemd-networkd[1573]: calie4bfc6a6c17: Link DOWN Dec 12 18:29:43.291965 systemd-networkd[1573]: calie4bfc6a6c17: Lost carrier Dec 12 18:29:43.337000 audit[4749]: NETFILTER_CFG table=filter:131 family=2 entries=59 op=nft_register_rule pid=4749 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:43.337000 audit[4749]: SYSCALL arch=c000003e syscall=46 success=yes exit=5668 a0=3 a1=7fff3d9c18a0 a2=0 a3=7fff3d9c188c items=0 ppid=4490 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:43.337000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:43.337000 audit[4749]: NETFILTER_CFG table=filter:132 family=2 entries=6 op=nft_unregister_chain pid=4749 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:43.337000 audit[4749]: SYSCALL arch=c000003e syscall=46 success=yes exit=880 a0=3 a1=7fff3d9c18a0 a2=0 a3=5580d1fa5000 items=0 ppid=4490 pid=4749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:43.337000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:43.432535 containerd[1664]: 2025-12-12 18:29:43.288 [INFO][4733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Dec 12 18:29:43.432535 containerd[1664]: 2025-12-12 18:29:43.290 [INFO][4733] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" iface="eth0" netns="/var/run/netns/cni-3d9d53e9-7b15-55c3-d4bd-a7f69a5e8cdd" Dec 12 18:29:43.432535 containerd[1664]: 2025-12-12 18:29:43.290 [INFO][4733] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" iface="eth0" netns="/var/run/netns/cni-3d9d53e9-7b15-55c3-d4bd-a7f69a5e8cdd" Dec 12 18:29:43.432535 containerd[1664]: 2025-12-12 18:29:43.309 [INFO][4733] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" after=18.774093ms iface="eth0" netns="/var/run/netns/cni-3d9d53e9-7b15-55c3-d4bd-a7f69a5e8cdd" Dec 12 18:29:43.432535 containerd[1664]: 2025-12-12 18:29:43.309 [INFO][4733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Dec 12 18:29:43.432535 containerd[1664]: 2025-12-12 18:29:43.309 [INFO][4733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Dec 12 18:29:43.432535 containerd[1664]: 2025-12-12 18:29:43.360 [INFO][4743] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" HandleID="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:43.432535 containerd[1664]: 2025-12-12 18:29:43.360 [INFO][4743] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:29:43.432535 containerd[1664]: 2025-12-12 18:29:43.360 [INFO][4743] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:29:43.437878 containerd[1664]: 2025-12-12 18:29:43.422 [INFO][4743] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" HandleID="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:43.437878 containerd[1664]: 2025-12-12 18:29:43.423 [INFO][4743] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" HandleID="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:43.437878 containerd[1664]: 2025-12-12 18:29:43.425 [INFO][4743] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:29:43.437878 containerd[1664]: 2025-12-12 18:29:43.428 [INFO][4733] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Dec 12 18:29:43.437878 containerd[1664]: time="2025-12-12T18:29:43.432863963Z" level=info msg="TearDown network for sandbox \"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\" successfully" Dec 12 18:29:43.437878 containerd[1664]: time="2025-12-12T18:29:43.432970908Z" level=info msg="StopPodSandbox for \"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\" returns successfully" Dec 12 18:29:43.437920 systemd[1]: run-netns-cni\x2d3d9d53e9\x2d7b15\x2d55c3\x2dd4bd\x2da7f69a5e8cdd.mount: Deactivated successfully. 
Dec 12 18:29:43.487383 kubelet[2966]: I1212 18:29:43.487302 2966 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-whisker-ca-bundle\") pod \"2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b\" (UID: \"2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b\") " Dec 12 18:29:43.487685 kubelet[2966]: I1212 18:29:43.487436 2966 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhzkl\" (UniqueName: \"kubernetes.io/projected/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-kube-api-access-lhzkl\") pod \"2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b\" (UID: \"2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b\") " Dec 12 18:29:43.487685 kubelet[2966]: I1212 18:29:43.487497 2966 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-whisker-backend-key-pair\") pod \"2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b\" (UID: \"2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b\") " Dec 12 18:29:43.496109 kubelet[2966]: I1212 18:29:43.492740 2966 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b" (UID: "2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:29:43.505696 kubelet[2966]: I1212 18:29:43.505586 2966 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b" (UID: "2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:29:43.507385 systemd[1]: var-lib-kubelet-pods-2bd61bfe\x2d05dc\x2d4b61\x2dbc64\x2deee3d98d3f4b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlhzkl.mount: Deactivated successfully. Dec 12 18:29:43.513344 kubelet[2966]: I1212 18:29:43.513269 2966 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-kube-api-access-lhzkl" (OuterVolumeSpecName: "kube-api-access-lhzkl") pod "2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b" (UID: "2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b"). InnerVolumeSpecName "kube-api-access-lhzkl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:29:43.515588 systemd[1]: var-lib-kubelet-pods-2bd61bfe\x2d05dc\x2d4b61\x2dbc64\x2deee3d98d3f4b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
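Aside (editorial illustration, assuming only the standard kubelet layout): the three UnmountVolume entries above correspond to per-volume directories of the form /var/lib/kubelet/pods/<podUID>/volumes/kubernetes.io~<plugin>/<volumeName>; the projected and secret volumes are tmpfs mounts, which is why matching systemd mount units (with "~" escaped as \x7e) are deactivated alongside them. A sketch with a hypothetical helper:

# Illustrative helper (not kubelet code): the per-volume directory behind the
# UnmountVolume/TearDown entries above, following the standard kubelet layout.
def pod_volume_dir(pod_uid: str, plugin: str, volume_name: str) -> str:
    return f"/var/lib/kubelet/pods/{pod_uid}/volumes/kubernetes.io~{plugin}/{volume_name}"

uid = "2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b"
for plugin, name in [("configmap", "whisker-ca-bundle"),
                     ("projected", "kube-api-access-lhzkl"),
                     ("secret", "whisker-backend-key-pair")]:
    print(pod_volume_dir(uid, plugin, name))
# The projected and secret paths are the tmpfs mounts whose systemd units
# (var-lib-kubelet-pods-...-volumes-kubernetes.io\x7e....mount) are deactivated above.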
Dec 12 18:29:43.593133 kubelet[2966]: I1212 18:29:43.592994 2966 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-whisker-ca-bundle\") on node \"srv-vv1nl.gb1.brightbox.com\" DevicePath \"\"" Dec 12 18:29:43.593133 kubelet[2966]: I1212 18:29:43.593071 2966 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lhzkl\" (UniqueName: \"kubernetes.io/projected/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-kube-api-access-lhzkl\") on node \"srv-vv1nl.gb1.brightbox.com\" DevicePath \"\"" Dec 12 18:29:43.593133 kubelet[2966]: I1212 18:29:43.593093 2966 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b-whisker-backend-key-pair\") on node \"srv-vv1nl.gb1.brightbox.com\" DevicePath \"\"" Dec 12 18:29:44.051783 systemd[1]: Removed slice kubepods-besteffort-pod2bd61bfe_05dc_4b61_bc64_eee3d98d3f4b.slice - libcontainer container kubepods-besteffort-pod2bd61bfe_05dc_4b61_bc64_eee3d98d3f4b.slice. Dec 12 18:29:44.153000 audit[4758]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=4758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:44.153000 audit[4758]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5f77b5d0 a2=0 a3=7fff5f77b5bc items=0 ppid=3085 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:44.153000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:44.259069 systemd[1]: Created slice kubepods-besteffort-podf480f83d_d2ca_423c_8dd5_8e1df9a9ca33.slice - libcontainer container kubepods-besteffort-podf480f83d_d2ca_423c_8dd5_8e1df9a9ca33.slice. 
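Aside (editorial illustration): the PROCTITLE field in the audit records above is the audited process's argv, hex-encoded with NUL separators. A minimal decoder (function name is ours) applied to the iptables-restore record above:

# Illustrative decoder (not from the log): audit PROCTITLE is the raw argv of the
# audited process, hex-encoded, with NUL bytes separating the arguments.
def decode_proctitle(hexstr: str) -> list[str]:
    return bytes.fromhex(hexstr).decode("utf-8", errors="replace").split("\x00")

print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
# -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

The iptables-nft-restore records earlier decode the same way, to "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000".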
Dec 12 18:29:44.284452 systemd-networkd[1573]: vxlan.calico: Gained IPv6LL Dec 12 18:29:44.305943 kubelet[2966]: I1212 18:29:44.304823 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv6kk\" (UniqueName: \"kubernetes.io/projected/f480f83d-d2ca-423c-8dd5-8e1df9a9ca33-kube-api-access-kv6kk\") pod \"whisker-6d77754785-tzmkt\" (UID: \"f480f83d-d2ca-423c-8dd5-8e1df9a9ca33\") " pod="calico-system/whisker-6d77754785-tzmkt" Dec 12 18:29:44.305943 kubelet[2966]: I1212 18:29:44.304904 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f480f83d-d2ca-423c-8dd5-8e1df9a9ca33-whisker-ca-bundle\") pod \"whisker-6d77754785-tzmkt\" (UID: \"f480f83d-d2ca-423c-8dd5-8e1df9a9ca33\") " pod="calico-system/whisker-6d77754785-tzmkt" Dec 12 18:29:44.305943 kubelet[2966]: I1212 18:29:44.305145 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f480f83d-d2ca-423c-8dd5-8e1df9a9ca33-whisker-backend-key-pair\") pod \"whisker-6d77754785-tzmkt\" (UID: \"f480f83d-d2ca-423c-8dd5-8e1df9a9ca33\") " pod="calico-system/whisker-6d77754785-tzmkt" Dec 12 18:29:44.312000 audit[4758]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:44.312000 audit[4758]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff5f77b5d0 a2=0 a3=0 items=0 ppid=3085 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:44.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:44.492815 containerd[1664]: time="2025-12-12T18:29:44.492752347Z" level=info msg="StopPodSandbox for \"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\"" Dec 12 18:29:44.536578 kubelet[2966]: I1212 18:29:44.536333 2966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b" path="/var/lib/kubelet/pods/2bd61bfe-05dc-4b61-bc64-eee3d98d3f4b/volumes" Dec 12 18:29:44.573699 containerd[1664]: time="2025-12-12T18:29:44.573028498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d77754785-tzmkt,Uid:f480f83d-d2ca-423c-8dd5-8e1df9a9ca33,Namespace:calico-system,Attempt:0,}" Dec 12 18:29:44.780693 containerd[1664]: 2025-12-12 18:29:44.640 [WARNING][4770] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:44.780693 containerd[1664]: 2025-12-12 18:29:44.641 [INFO][4770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Dec 12 18:29:44.780693 containerd[1664]: 2025-12-12 18:29:44.643 [INFO][4770] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" iface="eth0" netns="" Dec 12 18:29:44.780693 containerd[1664]: 2025-12-12 18:29:44.643 [INFO][4770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Dec 12 18:29:44.780693 containerd[1664]: 2025-12-12 18:29:44.643 [INFO][4770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Dec 12 18:29:44.780693 containerd[1664]: 2025-12-12 18:29:44.746 [INFO][4788] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" HandleID="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:44.780693 containerd[1664]: 2025-12-12 18:29:44.747 [INFO][4788] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:29:44.780693 containerd[1664]: 2025-12-12 18:29:44.747 [INFO][4788] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:29:44.780693 containerd[1664]: 2025-12-12 18:29:44.760 [WARNING][4788] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" HandleID="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:44.783301 containerd[1664]: 2025-12-12 18:29:44.761 [INFO][4788] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" HandleID="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:44.783301 containerd[1664]: 2025-12-12 18:29:44.764 [INFO][4788] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:29:44.783301 containerd[1664]: 2025-12-12 18:29:44.775 [INFO][4770] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Dec 12 18:29:44.783301 containerd[1664]: time="2025-12-12T18:29:44.780755064Z" level=info msg="TearDown network for sandbox \"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\" successfully" Dec 12 18:29:44.783301 containerd[1664]: time="2025-12-12T18:29:44.780830693Z" level=info msg="StopPodSandbox for \"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\" returns successfully" Dec 12 18:29:44.784885 containerd[1664]: time="2025-12-12T18:29:44.784845443Z" level=info msg="RemovePodSandbox for \"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\"" Dec 12 18:29:44.794273 containerd[1664]: time="2025-12-12T18:29:44.794206592Z" level=info msg="Forcibly stopping sandbox \"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\"" Dec 12 18:29:44.899528 systemd-networkd[1573]: cali1d7f4183bb4: Link UP Dec 12 18:29:44.901720 systemd-networkd[1573]: cali1d7f4183bb4: Gained carrier Dec 12 18:29:44.940315 containerd[1664]: 2025-12-12 18:29:44.707 [INFO][4777] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0 whisker-6d77754785- calico-system f480f83d-d2ca-423c-8dd5-8e1df9a9ca33 1019 0 2025-12-12 18:29:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d77754785 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-vv1nl.gb1.brightbox.com whisker-6d77754785-tzmkt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1d7f4183bb4 [] [] }} ContainerID="5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" Namespace="calico-system" Pod="whisker-6d77754785-tzmkt" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-" Dec 12 18:29:44.940315 containerd[1664]: 2025-12-12 18:29:44.716 [INFO][4777] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" Namespace="calico-system" Pod="whisker-6d77754785-tzmkt" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0" Dec 12 18:29:44.940315 containerd[1664]: 2025-12-12 18:29:44.804 [INFO][4795] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" HandleID="k8s-pod-network.5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0" Dec 12 18:29:44.942734 containerd[1664]: 2025-12-12 18:29:44.806 [INFO][4795] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" HandleID="k8s-pod-network.5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59e0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vv1nl.gb1.brightbox.com", "pod":"whisker-6d77754785-tzmkt", "timestamp":"2025-12-12 18:29:44.804991489 +0000 UTC"}, Hostname:"srv-vv1nl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Dec 12 18:29:44.942734 containerd[1664]: 2025-12-12 18:29:44.806 [INFO][4795] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:29:44.942734 containerd[1664]: 2025-12-12 18:29:44.806 [INFO][4795] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:29:44.942734 containerd[1664]: 2025-12-12 18:29:44.806 [INFO][4795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vv1nl.gb1.brightbox.com' Dec 12 18:29:44.942734 containerd[1664]: 2025-12-12 18:29:44.824 [INFO][4795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:44.942734 containerd[1664]: 2025-12-12 18:29:44.834 [INFO][4795] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:44.942734 containerd[1664]: 2025-12-12 18:29:44.842 [INFO][4795] ipam/ipam.go 511: Trying affinity for 192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:44.942734 containerd[1664]: 2025-12-12 18:29:44.848 [INFO][4795] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:44.942734 containerd[1664]: 2025-12-12 18:29:44.857 [INFO][4795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:44.944215 containerd[1664]: 2025-12-12 18:29:44.857 [INFO][4795] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:44.944215 containerd[1664]: 2025-12-12 18:29:44.859 [INFO][4795] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe Dec 12 18:29:44.944215 containerd[1664]: 2025-12-12 18:29:44.869 [INFO][4795] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:44.944215 containerd[1664]: 2025-12-12 18:29:44.885 [INFO][4795] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.5/26] block=192.168.62.0/26 handle="k8s-pod-network.5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:44.944215 containerd[1664]: 2025-12-12 18:29:44.885 [INFO][4795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.5/26] handle="k8s-pod-network.5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:44.944215 containerd[1664]: 2025-12-12 18:29:44.885 [INFO][4795] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:29:44.944215 containerd[1664]: 2025-12-12 18:29:44.886 [INFO][4795] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.5/26] IPv6=[] ContainerID="5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" HandleID="k8s-pod-network.5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0" Dec 12 18:29:44.945015 containerd[1664]: 2025-12-12 18:29:44.892 [INFO][4777] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" Namespace="calico-system" Pod="whisker-6d77754785-tzmkt" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0", GenerateName:"whisker-6d77754785-", Namespace:"calico-system", SelfLink:"", UID:"f480f83d-d2ca-423c-8dd5-8e1df9a9ca33", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d77754785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"", Pod:"whisker-6d77754785-tzmkt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1d7f4183bb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:44.945015 containerd[1664]: 2025-12-12 18:29:44.892 [INFO][4777] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.5/32] ContainerID="5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" Namespace="calico-system" Pod="whisker-6d77754785-tzmkt" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0" Dec 12 18:29:44.945500 containerd[1664]: 2025-12-12 18:29:44.892 [INFO][4777] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d7f4183bb4 ContainerID="5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" Namespace="calico-system" Pod="whisker-6d77754785-tzmkt" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0" Dec 12 18:29:44.945500 containerd[1664]: 2025-12-12 18:29:44.905 [INFO][4777] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" Namespace="calico-system" Pod="whisker-6d77754785-tzmkt" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0" Dec 12 18:29:44.945618 containerd[1664]: 2025-12-12 18:29:44.915 [INFO][4777] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" Namespace="calico-system" 
Pod="whisker-6d77754785-tzmkt" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0", GenerateName:"whisker-6d77754785-", Namespace:"calico-system", SelfLink:"", UID:"f480f83d-d2ca-423c-8dd5-8e1df9a9ca33", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d77754785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe", Pod:"whisker-6d77754785-tzmkt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1d7f4183bb4", MAC:"c6:57:65:15:7a:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:44.945748 containerd[1664]: 2025-12-12 18:29:44.934 [INFO][4777] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" Namespace="calico-system" Pod="whisker-6d77754785-tzmkt" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--6d77754785--tzmkt-eth0" Dec 12 18:29:44.996584 containerd[1664]: time="2025-12-12T18:29:44.995317004Z" level=info msg="connecting to shim 5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe" address="unix:///run/containerd/s/64be222ee9f57643b0d174fba07ec1b0f4f701e5ab7c137797752439d75e8412" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:29:45.032613 containerd[1664]: 2025-12-12 18:29:44.888 [WARNING][4812] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:45.032613 containerd[1664]: 2025-12-12 18:29:44.888 [INFO][4812] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Dec 12 18:29:45.032613 containerd[1664]: 2025-12-12 18:29:44.888 [INFO][4812] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" iface="eth0" netns="" Dec 12 18:29:45.032613 containerd[1664]: 2025-12-12 18:29:44.889 [INFO][4812] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Dec 12 18:29:45.032613 containerd[1664]: 2025-12-12 18:29:44.889 [INFO][4812] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Dec 12 18:29:45.032613 containerd[1664]: 2025-12-12 18:29:44.976 [INFO][4821] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" HandleID="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:45.032613 containerd[1664]: 2025-12-12 18:29:44.976 [INFO][4821] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:29:45.032613 containerd[1664]: 2025-12-12 18:29:44.976 [INFO][4821] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:29:45.032613 containerd[1664]: 2025-12-12 18:29:44.998 [WARNING][4821] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" HandleID="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:45.033433 containerd[1664]: 2025-12-12 18:29:44.998 [INFO][4821] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" HandleID="k8s-pod-network.dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Workload="srv--vv1nl.gb1.brightbox.com-k8s-whisker--7d4f547dfc--vzz6v-eth0" Dec 12 18:29:45.033433 containerd[1664]: 2025-12-12 18:29:45.025 [INFO][4821] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:29:45.033433 containerd[1664]: 2025-12-12 18:29:45.030 [INFO][4812] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452" Dec 12 18:29:45.033433 containerd[1664]: time="2025-12-12T18:29:45.032835657Z" level=info msg="TearDown network for sandbox \"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\" successfully" Dec 12 18:29:45.040508 containerd[1664]: time="2025-12-12T18:29:45.040215819Z" level=info msg="Ensure that sandbox dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452 in task-service has been cleanup successfully" Dec 12 18:29:45.046429 containerd[1664]: time="2025-12-12T18:29:45.046303197Z" level=info msg="RemovePodSandbox \"dbcc9bcd94dc2719cf28bdfb5545e9007960c88c67e3e1ad34e71a607f819452\" returns successfully" Dec 12 18:29:45.063678 systemd[1]: Started cri-containerd-5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe.scope - libcontainer container 5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe. 
Dec 12 18:29:45.097000 audit[4871]: NETFILTER_CFG table=filter:135 family=2 entries=65 op=nft_register_chain pid=4871 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:45.097000 audit[4871]: SYSCALL arch=c000003e syscall=46 success=yes exit=36448 a0=3 a1=7ffdcde3a600 a2=0 a3=7ffdcde3a5ec items=0 ppid=4490 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:45.097000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:45.116000 audit: BPF prog-id=230 op=LOAD Dec 12 18:29:45.118000 audit: BPF prog-id=231 op=LOAD Dec 12 18:29:45.118000 audit[4857]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4845 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:45.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562383138363861343531383130343434313032333530636534643662 Dec 12 18:29:45.118000 audit: BPF prog-id=231 op=UNLOAD Dec 12 18:29:45.118000 audit[4857]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4845 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:45.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562383138363861343531383130343434313032333530636534643662 Dec 12 18:29:45.118000 audit: BPF prog-id=232 op=LOAD Dec 12 18:29:45.118000 audit[4857]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4845 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:45.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562383138363861343531383130343434313032333530636534643662 Dec 12 18:29:45.119000 audit: BPF prog-id=233 op=LOAD Dec 12 18:29:45.119000 audit[4857]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4845 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:45.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562383138363861343531383130343434313032333530636534643662 Dec 12 18:29:45.119000 audit: BPF 
prog-id=233 op=UNLOAD Dec 12 18:29:45.119000 audit[4857]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4845 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:45.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562383138363861343531383130343434313032333530636534643662 Dec 12 18:29:45.120000 audit: BPF prog-id=232 op=UNLOAD Dec 12 18:29:45.120000 audit[4857]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4845 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:45.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562383138363861343531383130343434313032333530636534643662 Dec 12 18:29:45.121000 audit: BPF prog-id=234 op=LOAD Dec 12 18:29:45.121000 audit[4857]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4845 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:45.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562383138363861343531383130343434313032333530636534643662 Dec 12 18:29:45.189086 containerd[1664]: time="2025-12-12T18:29:45.188932239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d77754785-tzmkt,Uid:f480f83d-d2ca-423c-8dd5-8e1df9a9ca33,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b81868a451810444102350ce4d6b121623ba0ba3716fa228330e22d5e4e9ffe\"" Dec 12 18:29:45.194197 containerd[1664]: time="2025-12-12T18:29:45.193566517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:29:45.505955 containerd[1664]: time="2025-12-12T18:29:45.505895882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:45.508129 containerd[1664]: time="2025-12-12T18:29:45.508084065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:29:45.508267 containerd[1664]: time="2025-12-12T18:29:45.508222864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:45.508642 kubelet[2966]: E1212 18:29:45.508550 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:29:45.509802 kubelet[2966]: 
E1212 18:29:45.508683 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:29:45.509802 kubelet[2966]: E1212 18:29:45.509662 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:54e2ece65d4e4e24bb0d14e858b9af4a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kv6kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d77754785-tzmkt_calico-system(f480f83d-d2ca-423c-8dd5-8e1df9a9ca33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:45.512177 containerd[1664]: time="2025-12-12T18:29:45.512121955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:29:45.832877 containerd[1664]: time="2025-12-12T18:29:45.832684631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:45.833993 containerd[1664]: time="2025-12-12T18:29:45.833930031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:29:45.834099 containerd[1664]: time="2025-12-12T18:29:45.834035905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:45.834335 kubelet[2966]: E1212 18:29:45.834253 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:29:45.834335 kubelet[2966]: E1212 18:29:45.834325 2966 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:29:45.834647 kubelet[2966]: E1212 18:29:45.834510 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kv6kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d77754785-tzmkt_calico-system(f480f83d-d2ca-423c-8dd5-8e1df9a9ca33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:45.836044 kubelet[2966]: E1212 18:29:45.835981 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d77754785-tzmkt" podUID="f480f83d-d2ca-423c-8dd5-8e1df9a9ca33" Dec 12 18:29:46.041553 kubelet[2966]: E1212 18:29:46.041487 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d77754785-tzmkt" podUID="f480f83d-d2ca-423c-8dd5-8e1df9a9ca33" Dec 12 18:29:46.105402 kernel: kauditd_printk_skb: 333 callbacks suppressed Dec 12 18:29:46.105708 kernel: audit: type=1325 audit(1765564186.095:699): table=filter:136 family=2 entries=20 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:46.105799 kernel: audit: type=1300 audit(1765564186.095:699): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdbc4cfeb0 a2=0 a3=7ffdbc4cfe9c items=0 ppid=3085 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:46.095000 audit[4886]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:46.095000 audit[4886]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdbc4cfeb0 a2=0 a3=7ffdbc4cfe9c items=0 ppid=3085 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:46.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:46.124330 kernel: audit: type=1327 audit(1765564186.095:699): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:46.113000 audit[4886]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:46.113000 audit[4886]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdbc4cfeb0 a2=0 a3=0 items=0 ppid=3085 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:46.129601 kernel: audit: type=1325 audit(1765564186.113:700): table=nat:137 family=2 entries=14 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:46.129689 kernel: audit: type=1300 audit(1765564186.113:700): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdbc4cfeb0 a2=0 a3=0 items=0 ppid=3085 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:46.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:46.134505 
kernel: audit: type=1327 audit(1765564186.113:700): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:46.908545 systemd-networkd[1573]: cali1d7f4183bb4: Gained IPv6LL Dec 12 18:29:47.044420 kubelet[2966]: E1212 18:29:47.044302 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d77754785-tzmkt" podUID="f480f83d-d2ca-423c-8dd5-8e1df9a9ca33" Dec 12 18:29:50.497215 containerd[1664]: time="2025-12-12T18:29:50.496634868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84669789bf-xx9l4,Uid:1d148975-a25d-444b-b4af-f56c99e82a44,Namespace:calico-system,Attempt:0,}" Dec 12 18:29:50.497991 containerd[1664]: time="2025-12-12T18:29:50.496892764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c4cb5b5-v5k7t,Uid:dcf3ef86-77cd-46ff-befa-f79857cd4570,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:29:50.497991 containerd[1664]: time="2025-12-12T18:29:50.497007504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lsgz8,Uid:6315f58f-593a-4458-a6a7-81537b517426,Namespace:kube-system,Attempt:0,}" Dec 12 18:29:50.800088 systemd-networkd[1573]: cali4f832b2a7ec: Link UP Dec 12 18:29:50.802621 systemd-networkd[1573]: cali4f832b2a7ec: Gained carrier Dec 12 18:29:50.833697 containerd[1664]: 2025-12-12 18:29:50.646 [INFO][4898] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0 calico-apiserver-557c4cb5b5- calico-apiserver dcf3ef86-77cd-46ff-befa-f79857cd4570 846 0 2025-12-12 18:29:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:557c4cb5b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-vv1nl.gb1.brightbox.com calico-apiserver-557c4cb5b5-v5k7t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f832b2a7ec [] [] }} ContainerID="9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-v5k7t" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-" Dec 12 18:29:50.833697 containerd[1664]: 2025-12-12 18:29:50.647 [INFO][4898] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-v5k7t" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0" Dec 12 18:29:50.833697 
containerd[1664]: 2025-12-12 18:29:50.710 [INFO][4944] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" HandleID="k8s-pod-network.9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" Workload="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0" Dec 12 18:29:50.834034 containerd[1664]: 2025-12-12 18:29:50.710 [INFO][4944] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" HandleID="k8s-pod-network.9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" Workload="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-vv1nl.gb1.brightbox.com", "pod":"calico-apiserver-557c4cb5b5-v5k7t", "timestamp":"2025-12-12 18:29:50.710010333 +0000 UTC"}, Hostname:"srv-vv1nl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:29:50.834034 containerd[1664]: 2025-12-12 18:29:50.710 [INFO][4944] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:29:50.834034 containerd[1664]: 2025-12-12 18:29:50.711 [INFO][4944] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:29:50.834034 containerd[1664]: 2025-12-12 18:29:50.711 [INFO][4944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vv1nl.gb1.brightbox.com' Dec 12 18:29:50.834034 containerd[1664]: 2025-12-12 18:29:50.732 [INFO][4944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:50.834034 containerd[1664]: 2025-12-12 18:29:50.742 [INFO][4944] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:50.834034 containerd[1664]: 2025-12-12 18:29:50.750 [INFO][4944] ipam/ipam.go 511: Trying affinity for 192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:50.834034 containerd[1664]: 2025-12-12 18:29:50.753 [INFO][4944] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:50.834034 containerd[1664]: 2025-12-12 18:29:50.757 [INFO][4944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:50.835768 containerd[1664]: 2025-12-12 18:29:50.757 [INFO][4944] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:50.835768 containerd[1664]: 2025-12-12 18:29:50.760 [INFO][4944] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba Dec 12 18:29:50.835768 containerd[1664]: 2025-12-12 18:29:50.774 [INFO][4944] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:50.835768 containerd[1664]: 2025-12-12 18:29:50.784 [INFO][4944] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.62.6/26] block=192.168.62.0/26 handle="k8s-pod-network.9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:50.835768 containerd[1664]: 2025-12-12 18:29:50.784 [INFO][4944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.6/26] handle="k8s-pod-network.9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:50.835768 containerd[1664]: 2025-12-12 18:29:50.785 [INFO][4944] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:29:50.835768 containerd[1664]: 2025-12-12 18:29:50.785 [INFO][4944] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.6/26] IPv6=[] ContainerID="9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" HandleID="k8s-pod-network.9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" Workload="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0" Dec 12 18:29:50.836068 containerd[1664]: 2025-12-12 18:29:50.790 [INFO][4898] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-v5k7t" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0", GenerateName:"calico-apiserver-557c4cb5b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"dcf3ef86-77cd-46ff-befa-f79857cd4570", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"557c4cb5b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-557c4cb5b5-v5k7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f832b2a7ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:50.837047 containerd[1664]: 2025-12-12 18:29:50.791 [INFO][4898] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.6/32] ContainerID="9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-v5k7t" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0" Dec 12 18:29:50.837047 containerd[1664]: 2025-12-12 18:29:50.791 [INFO][4898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f832b2a7ec ContainerID="9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" 
Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-v5k7t" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0" Dec 12 18:29:50.837047 containerd[1664]: 2025-12-12 18:29:50.803 [INFO][4898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-v5k7t" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0" Dec 12 18:29:50.837242 containerd[1664]: 2025-12-12 18:29:50.805 [INFO][4898] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-v5k7t" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0", GenerateName:"calico-apiserver-557c4cb5b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"dcf3ef86-77cd-46ff-befa-f79857cd4570", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"557c4cb5b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba", Pod:"calico-apiserver-557c4cb5b5-v5k7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f832b2a7ec", MAC:"26:18:8f:f0:cd:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:50.837377 containerd[1664]: 2025-12-12 18:29:50.825 [INFO][4898] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" Namespace="calico-apiserver" Pod="calico-apiserver-557c4cb5b5-v5k7t" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--apiserver--557c4cb5b5--v5k7t-eth0" Dec 12 18:29:50.888000 audit[4968]: NETFILTER_CFG table=filter:138 family=2 entries=49 op=nft_register_chain pid=4968 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:50.900478 kernel: audit: type=1325 audit(1765564190.888:701): table=filter:138 family=2 entries=49 op=nft_register_chain pid=4968 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:50.888000 audit[4968]: SYSCALL arch=c000003e syscall=46 success=yes exit=25452 a0=3 a1=7ffd75754b80 a2=0 a3=7ffd75754b6c items=0 ppid=4490 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:50.907237 kernel: audit: type=1300 audit(1765564190.888:701): arch=c000003e syscall=46 success=yes exit=25452 a0=3 a1=7ffd75754b80 a2=0 a3=7ffd75754b6c items=0 ppid=4490 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:50.888000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:50.912286 kernel: audit: type=1327 audit(1765564190.888:701): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:50.962312 systemd-networkd[1573]: calie8d50ad6222: Link UP Dec 12 18:29:50.964343 systemd-networkd[1573]: calie8d50ad6222: Gained carrier Dec 12 18:29:50.978527 containerd[1664]: time="2025-12-12T18:29:50.978451314Z" level=info msg="connecting to shim 9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba" address="unix:///run/containerd/s/37d889e6d13ed8e05a2b9d3885b0d939670c37f94dddd6b352ffc2ad31f4e9f2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:29:51.027399 containerd[1664]: 2025-12-12 18:29:50.611 [INFO][4900] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0 calico-kube-controllers-84669789bf- calico-system 1d148975-a25d-444b-b4af-f56c99e82a44 845 0 2025-12-12 18:29:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84669789bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-vv1nl.gb1.brightbox.com calico-kube-controllers-84669789bf-xx9l4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie8d50ad6222 [] [] }} ContainerID="00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" Namespace="calico-system" Pod="calico-kube-controllers-84669789bf-xx9l4" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-" Dec 12 18:29:51.027399 containerd[1664]: 2025-12-12 18:29:50.613 [INFO][4900] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" Namespace="calico-system" Pod="calico-kube-controllers-84669789bf-xx9l4" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0" Dec 12 18:29:51.027399 containerd[1664]: 2025-12-12 18:29:50.739 [INFO][4937] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" HandleID="k8s-pod-network.00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" Workload="srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0" Dec 12 18:29:51.027760 containerd[1664]: 2025-12-12 18:29:50.739 [INFO][4937] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" 
HandleID="k8s-pod-network.00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" Workload="srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125500), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vv1nl.gb1.brightbox.com", "pod":"calico-kube-controllers-84669789bf-xx9l4", "timestamp":"2025-12-12 18:29:50.739530204 +0000 UTC"}, Hostname:"srv-vv1nl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:29:51.027760 containerd[1664]: 2025-12-12 18:29:50.739 [INFO][4937] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:29:51.027760 containerd[1664]: 2025-12-12 18:29:50.784 [INFO][4937] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:29:51.027760 containerd[1664]: 2025-12-12 18:29:50.784 [INFO][4937] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vv1nl.gb1.brightbox.com' Dec 12 18:29:51.027760 containerd[1664]: 2025-12-12 18:29:50.837 [INFO][4937] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.027760 containerd[1664]: 2025-12-12 18:29:50.848 [INFO][4937] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.027760 containerd[1664]: 2025-12-12 18:29:50.860 [INFO][4937] ipam/ipam.go 511: Trying affinity for 192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.027760 containerd[1664]: 2025-12-12 18:29:50.865 [INFO][4937] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.027760 containerd[1664]: 2025-12-12 18:29:50.871 [INFO][4937] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.029542 containerd[1664]: 2025-12-12 18:29:50.871 [INFO][4937] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.029542 containerd[1664]: 2025-12-12 18:29:50.875 [INFO][4937] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c Dec 12 18:29:51.029542 containerd[1664]: 2025-12-12 18:29:50.886 [INFO][4937] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.029542 containerd[1664]: 2025-12-12 18:29:50.932 [INFO][4937] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.7/26] block=192.168.62.0/26 handle="k8s-pod-network.00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.029542 containerd[1664]: 2025-12-12 18:29:50.932 [INFO][4937] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.7/26] handle="k8s-pod-network.00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.029542 containerd[1664]: 2025-12-12 18:29:50.934 [INFO][4937] ipam/ipam_plugin.go 398: 
Released host-wide IPAM lock. Dec 12 18:29:51.029542 containerd[1664]: 2025-12-12 18:29:50.934 [INFO][4937] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.7/26] IPv6=[] ContainerID="00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" HandleID="k8s-pod-network.00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" Workload="srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0" Dec 12 18:29:51.031040 containerd[1664]: 2025-12-12 18:29:50.946 [INFO][4900] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" Namespace="calico-system" Pod="calico-kube-controllers-84669789bf-xx9l4" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0", GenerateName:"calico-kube-controllers-84669789bf-", Namespace:"calico-system", SelfLink:"", UID:"1d148975-a25d-444b-b4af-f56c99e82a44", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84669789bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-84669789bf-xx9l4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8d50ad6222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:51.031236 containerd[1664]: 2025-12-12 18:29:50.948 [INFO][4900] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.7/32] ContainerID="00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" Namespace="calico-system" Pod="calico-kube-controllers-84669789bf-xx9l4" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0" Dec 12 18:29:51.031236 containerd[1664]: 2025-12-12 18:29:50.949 [INFO][4900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8d50ad6222 ContainerID="00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" Namespace="calico-system" Pod="calico-kube-controllers-84669789bf-xx9l4" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0" Dec 12 18:29:51.031236 containerd[1664]: 2025-12-12 18:29:50.966 [INFO][4900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" Namespace="calico-system" Pod="calico-kube-controllers-84669789bf-xx9l4" 
WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0" Dec 12 18:29:51.031441 containerd[1664]: 2025-12-12 18:29:50.968 [INFO][4900] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" Namespace="calico-system" Pod="calico-kube-controllers-84669789bf-xx9l4" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0", GenerateName:"calico-kube-controllers-84669789bf-", Namespace:"calico-system", SelfLink:"", UID:"1d148975-a25d-444b-b4af-f56c99e82a44", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84669789bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c", Pod:"calico-kube-controllers-84669789bf-xx9l4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8d50ad6222", MAC:"ce:e7:13:df:09:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:51.031566 containerd[1664]: 2025-12-12 18:29:51.004 [INFO][4900] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" Namespace="calico-system" Pod="calico-kube-controllers-84669789bf-xx9l4" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-calico--kube--controllers--84669789bf--xx9l4-eth0" Dec 12 18:29:51.094031 systemd-networkd[1573]: cali1cf793aabef: Link UP Dec 12 18:29:51.097591 systemd[1]: Started cri-containerd-9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba.scope - libcontainer container 9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba. 
Dec 12 18:29:51.100466 systemd-networkd[1573]: cali1cf793aabef: Gained carrier Dec 12 18:29:51.134446 containerd[1664]: time="2025-12-12T18:29:51.134221928Z" level=info msg="connecting to shim 00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c" address="unix:///run/containerd/s/e7459f7de4ab591d02b29a182ac373333756eee6b7a8700afc624fbda1eb3db0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:29:51.140939 containerd[1664]: 2025-12-12 18:29:50.643 [INFO][4913] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0 coredns-668d6bf9bc- kube-system 6315f58f-593a-4458-a6a7-81537b517426 837 0 2025-12-12 18:28:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-vv1nl.gb1.brightbox.com coredns-668d6bf9bc-lsgz8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1cf793aabef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgz8" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-" Dec 12 18:29:51.140939 containerd[1664]: 2025-12-12 18:29:50.643 [INFO][4913] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgz8" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0" Dec 12 18:29:51.140939 containerd[1664]: 2025-12-12 18:29:50.748 [INFO][4942] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" HandleID="k8s-pod-network.26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" Workload="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0" Dec 12 18:29:51.141260 containerd[1664]: 2025-12-12 18:29:50.748 [INFO][4942] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" HandleID="k8s-pod-network.26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" Workload="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028b880), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-vv1nl.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-lsgz8", "timestamp":"2025-12-12 18:29:50.748377157 +0000 UTC"}, Hostname:"srv-vv1nl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:29:51.141260 containerd[1664]: 2025-12-12 18:29:50.748 [INFO][4942] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:29:51.141260 containerd[1664]: 2025-12-12 18:29:50.933 [INFO][4942] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
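Editor's note: the coredns request above queues at 18:29:50.748 ("About to acquire host-wide IPAM lock") and only proceeds at 18:29:50.933, once the kube-controllers assignment has released the lock. A hedged sketch of that serialisation pattern, using a plain in-process mutex rather than Calico's real datastore-backed locking:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator hands out addresses from one block under a single lock,
// the way the log shows each CNI ADD serialising on the "host-wide IPAM
// lock" before touching 192.168.62.0/26. Illustrative only.
type blockAllocator struct {
	mu    sync.Mutex
	block netip.Prefix
	next  netip.Addr
}

func (b *blockAllocator) assign(pod string) netip.Addr {
	b.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."
	a := b.next
	b.next = a.Next()
	fmt.Printf("assigned %s/32 to %s from block %s\n", a, pod, b.block)
	return a
}

func main() {
	alloc := &blockAllocator{
		block: netip.MustParsePrefix("192.168.62.0/26"),
		next:  netip.MustParseAddr("192.168.62.6"),
	}
	pods := []string{
		"calico-apiserver-557c4cb5b5-v5k7t",
		"calico-kube-controllers-84669789bf-xx9l4",
		"coredns-668d6bf9bc-lsgz8",
	}
	var wg sync.WaitGroup
	for _, p := range pods {
		wg.Add(1)
		// Which pod gets which address depends on goroutine scheduling here,
		// just as the real assignment order depends on which CNI ADD wins
		// the lock first; only the serialisation is the point.
		go func(p string) { defer wg.Done(); alloc.assign(p) }(p)
	}
	wg.Wait()
}
```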
Dec 12 18:29:51.141260 containerd[1664]: 2025-12-12 18:29:50.933 [INFO][4942] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vv1nl.gb1.brightbox.com' Dec 12 18:29:51.141260 containerd[1664]: 2025-12-12 18:29:50.958 [INFO][4942] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.141260 containerd[1664]: 2025-12-12 18:29:50.979 [INFO][4942] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.141260 containerd[1664]: 2025-12-12 18:29:51.025 [INFO][4942] ipam/ipam.go 511: Trying affinity for 192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.141260 containerd[1664]: 2025-12-12 18:29:51.033 [INFO][4942] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.141260 containerd[1664]: 2025-12-12 18:29:51.037 [INFO][4942] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.141643 containerd[1664]: 2025-12-12 18:29:51.037 [INFO][4942] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.141643 containerd[1664]: 2025-12-12 18:29:51.043 [INFO][4942] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608 Dec 12 18:29:51.141643 containerd[1664]: 2025-12-12 18:29:51.051 [INFO][4942] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.141643 containerd[1664]: 2025-12-12 18:29:51.073 [INFO][4942] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.8/26] block=192.168.62.0/26 handle="k8s-pod-network.26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.141643 containerd[1664]: 2025-12-12 18:29:51.073 [INFO][4942] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.8/26] handle="k8s-pod-network.26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:51.141643 containerd[1664]: 2025-12-12 18:29:51.073 [INFO][4942] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
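Editor's note: three of these CNI ADD flows are now interleaved through the journal (apiserver, kube-controllers, coredns), and each Calico line carries the same `timestamp [LEVEL][pid] file line: message` shape inside the containerd record. A small, illustrative parser for that inner format, inferred from the lines above rather than from any official log schema:

```go
package main

import (
	"fmt"
	"regexp"
)

// calicoLine matches the Calico CNI/IPAM lines that containerd relays into
// the journal, e.g.
//   2025-12-12 18:29:51.073 [INFO][4942] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: ...
// Capture groups: timestamp, level, pid, source file, source line, message.
var calicoLine = regexp.MustCompile(
	`^(\S+ \S+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	sample := `2025-12-12 18:29:51.073 [INFO][4942] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.8/26]`
	m := calicoLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("time=%s level=%s pid=%s src=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```

Grouping the parsed lines by the pid field ([4937], [4942], [4944] above) is enough to separate the three concurrent allocations when reading this stretch of the journal.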
Dec 12 18:29:51.141643 containerd[1664]: 2025-12-12 18:29:51.073 [INFO][4942] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.8/26] IPv6=[] ContainerID="26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" HandleID="k8s-pod-network.26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" Workload="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0" Dec 12 18:29:51.141917 containerd[1664]: 2025-12-12 18:29:51.083 [INFO][4913] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgz8" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6315f58f-593a-4458-a6a7-81537b517426", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-lsgz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cf793aabef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:51.141917 containerd[1664]: 2025-12-12 18:29:51.083 [INFO][4913] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.8/32] ContainerID="26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgz8" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0" Dec 12 18:29:51.141917 containerd[1664]: 2025-12-12 18:29:51.083 [INFO][4913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cf793aabef ContainerID="26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgz8" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0" Dec 12 18:29:51.141917 containerd[1664]: 2025-12-12 18:29:51.108 [INFO][4913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-lsgz8" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0" Dec 12 18:29:51.141917 containerd[1664]: 2025-12-12 18:29:51.109 [INFO][4913] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgz8" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6315f58f-593a-4458-a6a7-81537b517426", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608", Pod:"coredns-668d6bf9bc-lsgz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cf793aabef", MAC:"8a:85:b4:20:bd:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:51.141917 containerd[1664]: 2025-12-12 18:29:51.128 [INFO][4913] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgz8" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lsgz8-eth0" Dec 12 18:29:51.166311 kernel: audit: type=1325 audit(1765564191.139:702): table=filter:139 family=2 entries=52 op=nft_register_chain pid=5008 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:51.139000 audit[5008]: NETFILTER_CFG table=filter:139 family=2 entries=52 op=nft_register_chain pid=5008 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:51.139000 audit[5008]: SYSCALL arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7ffd607f9620 a2=0 a3=7ffd607f960c items=0 ppid=4490 pid=5008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.174182 kernel: audit: type=1300 audit(1765564191.139:702): 
arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7ffd607f9620 a2=0 a3=7ffd607f960c items=0 ppid=4490 pid=5008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.139000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:51.179197 kernel: audit: type=1327 audit(1765564191.139:702): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:51.214539 systemd[1]: Started cri-containerd-00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c.scope - libcontainer container 00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c. Dec 12 18:29:51.227217 kernel: audit: type=1334 audit(1765564191.224:703): prog-id=235 op=LOAD Dec 12 18:29:51.224000 audit: BPF prog-id=235 op=LOAD Dec 12 18:29:51.230303 kernel: audit: type=1334 audit(1765564191.226:704): prog-id=236 op=LOAD Dec 12 18:29:51.226000 audit: BPF prog-id=236 op=LOAD Dec 12 18:29:51.226000 audit[4990]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4979 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.237365 kernel: audit: type=1300 audit(1765564191.226:704): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4979 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653234633062333065386664653734393238393437343139633333 Dec 12 18:29:51.245188 kernel: audit: type=1327 audit(1765564191.226:704): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653234633062333065386664653734393238393437343139633333 Dec 12 18:29:51.246345 containerd[1664]: time="2025-12-12T18:29:51.245586207Z" level=info msg="connecting to shim 26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608" address="unix:///run/containerd/s/9abf0ef66323a0bde246e03fc9320e8da2a9f2ddd69017582b9d45036aea5d21" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:29:51.227000 audit: BPF prog-id=236 op=UNLOAD Dec 12 18:29:51.255422 kernel: audit: type=1334 audit(1765564191.227:705): prog-id=236 op=UNLOAD Dec 12 18:29:51.255584 kernel: audit: type=1300 audit(1765564191.227:705): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4979 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.227000 audit[4990]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4979 pid=4990 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653234633062333065386664653734393238393437343139633333 Dec 12 18:29:51.263380 kernel: audit: type=1327 audit(1765564191.227:705): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653234633062333065386664653734393238393437343139633333 Dec 12 18:29:51.227000 audit: BPF prog-id=237 op=LOAD Dec 12 18:29:51.227000 audit[4990]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4979 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653234633062333065386664653734393238393437343139633333 Dec 12 18:29:51.227000 audit: BPF prog-id=238 op=LOAD Dec 12 18:29:51.227000 audit[4990]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4979 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653234633062333065386664653734393238393437343139633333 Dec 12 18:29:51.227000 audit: BPF prog-id=238 op=UNLOAD Dec 12 18:29:51.227000 audit[4990]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4979 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653234633062333065386664653734393238393437343139633333 Dec 12 18:29:51.227000 audit: BPF prog-id=237 op=UNLOAD Dec 12 18:29:51.227000 audit[4990]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4979 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653234633062333065386664653734393238393437343139633333 Dec 12 18:29:51.227000 audit: 
BPF prog-id=239 op=LOAD Dec 12 18:29:51.227000 audit[4990]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4979 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963653234633062333065386664653734393238393437343139633333 Dec 12 18:29:51.319743 systemd[1]: Started cri-containerd-26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608.scope - libcontainer container 26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608. Dec 12 18:29:51.322000 audit: BPF prog-id=240 op=LOAD Dec 12 18:29:51.324000 audit: BPF prog-id=241 op=LOAD Dec 12 18:29:51.324000 audit[5039]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5019 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383537303538656339333463346166356331383161376365626437 Dec 12 18:29:51.324000 audit: BPF prog-id=241 op=UNLOAD Dec 12 18:29:51.324000 audit[5039]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5019 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383537303538656339333463346166356331383161376365626437 Dec 12 18:29:51.324000 audit: BPF prog-id=242 op=LOAD Dec 12 18:29:51.324000 audit[5039]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5019 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383537303538656339333463346166356331383161376365626437 Dec 12 18:29:51.324000 audit: BPF prog-id=243 op=LOAD Dec 12 18:29:51.324000 audit[5039]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5019 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.324000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383537303538656339333463346166356331383161376365626437 Dec 12 18:29:51.324000 audit: BPF prog-id=243 op=UNLOAD Dec 12 18:29:51.324000 audit[5039]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5019 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383537303538656339333463346166356331383161376365626437 Dec 12 18:29:51.325000 audit: BPF prog-id=242 op=UNLOAD Dec 12 18:29:51.325000 audit[5039]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5019 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383537303538656339333463346166356331383161376365626437 Dec 12 18:29:51.325000 audit: BPF prog-id=244 op=LOAD Dec 12 18:29:51.325000 audit[5039]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5019 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030383537303538656339333463346166356331383161376365626437 Dec 12 18:29:51.350000 audit[5098]: NETFILTER_CFG table=filter:140 family=2 entries=68 op=nft_register_chain pid=5098 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:51.350000 audit[5098]: SYSCALL arch=c000003e syscall=46 success=yes exit=31344 a0=3 a1=7fff628608d0 a2=0 a3=7fff628608bc items=0 ppid=4490 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.350000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:51.366000 audit: BPF prog-id=245 op=LOAD Dec 12 18:29:51.368000 audit: BPF prog-id=246 op=LOAD Dec 12 18:29:51.368000 audit[5084]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5069 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.368000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643630323239313939626335316430663130346132376333613836 Dec 12 18:29:51.368000 audit: BPF prog-id=246 op=UNLOAD Dec 12 18:29:51.368000 audit[5084]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5069 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643630323239313939626335316430663130346132376333613836 Dec 12 18:29:51.368000 audit: BPF prog-id=247 op=LOAD Dec 12 18:29:51.368000 audit[5084]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5069 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643630323239313939626335316430663130346132376333613836 Dec 12 18:29:51.370000 audit: BPF prog-id=248 op=LOAD Dec 12 18:29:51.370000 audit[5084]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5069 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643630323239313939626335316430663130346132376333613836 Dec 12 18:29:51.371000 audit: BPF prog-id=248 op=UNLOAD Dec 12 18:29:51.371000 audit[5084]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5069 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.374139 containerd[1664]: time="2025-12-12T18:29:51.373790254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c4cb5b5-v5k7t,Uid:dcf3ef86-77cd-46ff-befa-f79857cd4570,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9ce24c0b30e8fde74928947419c33c0fd98cb2bef1731cb378ab5c84d45cafba\"" Dec 12 18:29:51.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643630323239313939626335316430663130346132376333613836 Dec 12 18:29:51.373000 audit: BPF prog-id=247 op=UNLOAD Dec 12 18:29:51.373000 audit[5084]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5069 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643630323239313939626335316430663130346132376333613836 Dec 12 18:29:51.374000 audit: BPF prog-id=249 op=LOAD Dec 12 18:29:51.374000 audit[5084]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5069 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643630323239313939626335316430663130346132376333613836 Dec 12 18:29:51.379234 containerd[1664]: time="2025-12-12T18:29:51.378690588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:29:51.438373 containerd[1664]: time="2025-12-12T18:29:51.438309287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lsgz8,Uid:6315f58f-593a-4458-a6a7-81537b517426,Namespace:kube-system,Attempt:0,} returns sandbox id \"26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608\"" Dec 12 18:29:51.447492 containerd[1664]: time="2025-12-12T18:29:51.447418498Z" level=info msg="CreateContainer within sandbox \"26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:29:51.470950 containerd[1664]: time="2025-12-12T18:29:51.470517596Z" level=info msg="Container 4be0e5ba4696ee18cfa72c351c11692074ecedcc890bf706a8b2a19a5a29d80f: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:29:51.477722 containerd[1664]: time="2025-12-12T18:29:51.477684349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84669789bf-xx9l4,Uid:1d148975-a25d-444b-b4af-f56c99e82a44,Namespace:calico-system,Attempt:0,} returns sandbox id \"00857058ec934c4af5c181a7cebd7b9dcc65a72559c60e482c999b47eb800f8c\"" Dec 12 18:29:51.478902 containerd[1664]: time="2025-12-12T18:29:51.478749334Z" level=info msg="CreateContainer within sandbox \"26d60229199bc51d0f104a27c3a86331376f275e4e7e0b2ce8b4dc47a38d3608\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4be0e5ba4696ee18cfa72c351c11692074ecedcc890bf706a8b2a19a5a29d80f\"" Dec 12 18:29:51.480834 containerd[1664]: time="2025-12-12T18:29:51.480683923Z" level=info msg="StartContainer for \"4be0e5ba4696ee18cfa72c351c11692074ecedcc890bf706a8b2a19a5a29d80f\"" Dec 12 18:29:51.484782 containerd[1664]: time="2025-12-12T18:29:51.484734531Z" level=info msg="connecting to shim 4be0e5ba4696ee18cfa72c351c11692074ecedcc890bf706a8b2a19a5a29d80f" address="unix:///run/containerd/s/9abf0ef66323a0bde246e03fc9320e8da2a9f2ddd69017582b9d45036aea5d21" protocol=ttrpc version=3 Dec 12 18:29:51.528508 systemd[1]: Started cri-containerd-4be0e5ba4696ee18cfa72c351c11692074ecedcc890bf706a8b2a19a5a29d80f.scope - libcontainer container 4be0e5ba4696ee18cfa72c351c11692074ecedcc890bf706a8b2a19a5a29d80f. 
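Editor's note: the audit records in this stretch carry their command lines as hex-encoded, NUL-separated PROCTITLE strings (the runc and iptables-nft-restore invocations). A short sketch that decodes one back into a readable command line; the sample value is copied verbatim from the NETFILTER_CFG record earlier in this log:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns the hex-encoded, NUL-separated argv that the kernel
// stores in audit PROCTITLE records back into a readable command line.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv elements are separated by NUL bytes; join them with spaces.
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Copied from one of the NETFILTER_CFG audit records above.
	const p = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
	cmd, err := decodeProctitle(p)
	if err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Println(cmd)
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```

The longer runc PROCTITLE values decode the same way, yielding `runc --root /run/containerd/runc/k8s.io --log ...` invocations for each container being started.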
Dec 12 18:29:51.554000 audit: BPF prog-id=250 op=LOAD Dec 12 18:29:51.555000 audit: BPF prog-id=251 op=LOAD Dec 12 18:29:51.555000 audit[5127]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5069 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462653065356261343639366565313863666137326333353163313136 Dec 12 18:29:51.555000 audit: BPF prog-id=251 op=UNLOAD Dec 12 18:29:51.555000 audit[5127]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5069 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462653065356261343639366565313863666137326333353163313136 Dec 12 18:29:51.555000 audit: BPF prog-id=252 op=LOAD Dec 12 18:29:51.555000 audit[5127]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5069 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462653065356261343639366565313863666137326333353163313136 Dec 12 18:29:51.555000 audit: BPF prog-id=253 op=LOAD Dec 12 18:29:51.555000 audit[5127]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5069 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462653065356261343639366565313863666137326333353163313136 Dec 12 18:29:51.555000 audit: BPF prog-id=253 op=UNLOAD Dec 12 18:29:51.555000 audit[5127]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5069 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462653065356261343639366565313863666137326333353163313136 Dec 12 18:29:51.555000 audit: BPF prog-id=252 op=UNLOAD Dec 12 18:29:51.555000 audit[5127]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5069 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462653065356261343639366565313863666137326333353163313136 Dec 12 18:29:51.555000 audit: BPF prog-id=254 op=LOAD Dec 12 18:29:51.555000 audit[5127]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5069 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:51.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462653065356261343639366565313863666137326333353163313136 Dec 12 18:29:51.593378 containerd[1664]: time="2025-12-12T18:29:51.593325847Z" level=info msg="StartContainer for \"4be0e5ba4696ee18cfa72c351c11692074ecedcc890bf706a8b2a19a5a29d80f\" returns successfully" Dec 12 18:29:51.691569 containerd[1664]: time="2025-12-12T18:29:51.691375362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:51.693143 containerd[1664]: time="2025-12-12T18:29:51.693075531Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:29:51.693750 containerd[1664]: time="2025-12-12T18:29:51.693685327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:51.694526 kubelet[2966]: E1212 18:29:51.694215 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:29:51.694526 kubelet[2966]: E1212 18:29:51.694309 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:29:51.695139 kubelet[2966]: E1212 18:29:51.694672 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6szg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-557c4cb5b5-v5k7t_calico-apiserver(dcf3ef86-77cd-46ff-befa-f79857cd4570): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:51.697482 kubelet[2966]: E1212 18:29:51.696635 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" podUID="dcf3ef86-77cd-46ff-befa-f79857cd4570" Dec 12 18:29:51.697601 containerd[1664]: time="2025-12-12T18:29:51.696942265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:29:52.008491 containerd[1664]: time="2025-12-12T18:29:52.008352985Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:52.009570 containerd[1664]: time="2025-12-12T18:29:52.009521098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:29:52.009745 containerd[1664]: time="2025-12-12T18:29:52.009541783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" 
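Editor's note: both pulls above fail with a 404 from ghcr.io, so kubelet records ErrImagePull and, in the following lines, switches the pods to ImagePullBackOff. A minimal sketch of that retry-delay shape; the 10s initial delay and 5m cap are assumed kubelet defaults for image pull backoff, not values taken from this log:

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay up to a limit. The kubelet applies
// a backoff of roughly this shape between pull retries once a pull has
// failed with ErrImagePull, which is why the log flips to ImagePullBackOff.
// The concrete values below are assumptions, not taken from this log.
func nextBackoff(prev, limit time.Duration) time.Duration {
	if prev == 0 {
		return 10 * time.Second // assumed initial delay
	}
	next := prev * 2
	if next > limit {
		return limit
	}
	return next
}

func main() {
	var d time.Duration
	for i := 0; i < 7; i++ {
		d = nextBackoff(d, 5*time.Minute) // assumed cap
		fmt.Printf("retry %d after %s\n", i+1, d)
	}
}
```

Whatever the exact constants, the observable effect is the one recorded below: repeated "Back-off pulling image" pod sync errors until the missing ghcr.io/flatcar/calico tags become resolvable.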
Dec 12 18:29:52.009883 kubelet[2966]: E1212 18:29:52.009828 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:29:52.009999 kubelet[2966]: E1212 18:29:52.009927 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:29:52.016273 kubelet[2966]: E1212 18:29:52.016119 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pjtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-84669789bf-xx9l4_calico-system(1d148975-a25d-444b-b4af-f56c99e82a44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:52.017505 kubelet[2966]: E1212 18:29:52.017415 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" podUID="1d148975-a25d-444b-b4af-f56c99e82a44" Dec 12 18:29:52.063389 kubelet[2966]: E1212 18:29:52.063155 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" podUID="1d148975-a25d-444b-b4af-f56c99e82a44" Dec 12 18:29:52.069078 kubelet[2966]: E1212 18:29:52.069013 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" podUID="dcf3ef86-77cd-46ff-befa-f79857cd4570" Dec 12 18:29:52.125189 kubelet[2966]: I1212 18:29:52.122418 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lsgz8" podStartSLOduration=63.114636586 podStartE2EDuration="1m3.114636586s" podCreationTimestamp="2025-12-12 18:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:29:52.112685214 +0000 UTC m=+67.844502944" watchObservedRunningTime="2025-12-12 18:29:52.114636586 +0000 UTC m=+67.846454315" Dec 12 18:29:52.162000 audit[5163]: NETFILTER_CFG table=filter:141 family=2 entries=17 op=nft_register_rule pid=5163 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:52.162000 audit[5163]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca38fc970 a2=0 a3=7ffca38fc95c items=0 ppid=3085 pid=5163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:52.162000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:52.169000 audit[5163]: NETFILTER_CFG table=nat:142 family=2 entries=35 op=nft_register_chain pid=5163 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:52.169000 audit[5163]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffca38fc970 a2=0 a3=7ffca38fc95c items=0 ppid=3085 pid=5163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:52.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:52.200000 audit[5165]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5165 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:52.200000 audit[5165]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffebea431f0 a2=0 a3=7ffebea431dc items=0 ppid=3085 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:52.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:52.206000 audit[5165]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5165 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:52.206000 audit[5165]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffebea431f0 a2=0 a3=7ffebea431dc items=0 ppid=3085 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:52.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:52.496211 containerd[1664]: time="2025-12-12T18:29:52.496130730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grwhm,Uid:a11bd055-cf72-461b-9c3e-d8c8d4421dd2,Namespace:kube-system,Attempt:0,}" Dec 12 18:29:52.541977 systemd-networkd[1573]: calie8d50ad6222: Gained IPv6LL Dec 12 18:29:52.690215 systemd-networkd[1573]: cali7b6376c6571: Link UP Dec 12 18:29:52.690604 systemd-networkd[1573]: cali7b6376c6571: Gained carrier Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.568 [INFO][5169] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0 coredns-668d6bf9bc- kube-system a11bd055-cf72-461b-9c3e-d8c8d4421dd2 847 0 2025-12-12 18:28:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-vv1nl.gb1.brightbox.com coredns-668d6bf9bc-grwhm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7b6376c6571 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" Namespace="kube-system" Pod="coredns-668d6bf9bc-grwhm" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-" Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.569 [INFO][5169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" Namespace="kube-system" Pod="coredns-668d6bf9bc-grwhm" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0" Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.625 [INFO][5178] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" HandleID="k8s-pod-network.d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" Workload="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0" Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.625 [INFO][5178] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" HandleID="k8s-pod-network.d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" Workload="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309670), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-vv1nl.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-grwhm", "timestamp":"2025-12-12 18:29:52.625267134 +0000 UTC"}, Hostname:"srv-vv1nl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.625 [INFO][5178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.625 [INFO][5178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.625 [INFO][5178] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vv1nl.gb1.brightbox.com' Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.638 [INFO][5178] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.646 [INFO][5178] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.654 [INFO][5178] ipam/ipam.go 511: Trying affinity for 192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.657 [INFO][5178] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.661 [INFO][5178] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.661 [INFO][5178] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.663 [INFO][5178] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.668 [INFO][5178] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.680 [INFO][5178] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.9/26] block=192.168.62.0/26 
handle="k8s-pod-network.d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.680 [INFO][5178] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.9/26] handle="k8s-pod-network.d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" host="srv-vv1nl.gb1.brightbox.com" Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.680 [INFO][5178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:29:52.712554 containerd[1664]: 2025-12-12 18:29:52.680 [INFO][5178] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.9/26] IPv6=[] ContainerID="d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" HandleID="k8s-pod-network.d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" Workload="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0" Dec 12 18:29:52.716510 containerd[1664]: 2025-12-12 18:29:52.684 [INFO][5169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" Namespace="kube-system" Pod="coredns-668d6bf9bc-grwhm" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a11bd055-cf72-461b-9c3e-d8c8d4421dd2", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-grwhm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b6376c6571", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:52.716510 containerd[1664]: 2025-12-12 18:29:52.684 [INFO][5169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.9/32] ContainerID="d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" Namespace="kube-system" Pod="coredns-668d6bf9bc-grwhm" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0" Dec 12 18:29:52.716510 containerd[1664]: 2025-12-12 18:29:52.685 [INFO][5169] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to cali7b6376c6571 ContainerID="d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" Namespace="kube-system" Pod="coredns-668d6bf9bc-grwhm" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0" Dec 12 18:29:52.716510 containerd[1664]: 2025-12-12 18:29:52.692 [INFO][5169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" Namespace="kube-system" Pod="coredns-668d6bf9bc-grwhm" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0" Dec 12 18:29:52.716510 containerd[1664]: 2025-12-12 18:29:52.692 [INFO][5169] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" Namespace="kube-system" Pod="coredns-668d6bf9bc-grwhm" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a11bd055-cf72-461b-9c3e-d8c8d4421dd2", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vv1nl.gb1.brightbox.com", ContainerID:"d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c", Pod:"coredns-668d6bf9bc-grwhm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b6376c6571", MAC:"2a:4b:65:c7:bf:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:29:52.716510 containerd[1664]: 2025-12-12 18:29:52.708 [INFO][5169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" Namespace="kube-system" Pod="coredns-668d6bf9bc-grwhm" WorkloadEndpoint="srv--vv1nl.gb1.brightbox.com-k8s-coredns--668d6bf9bc--grwhm-eth0" Dec 12 18:29:52.732740 systemd-networkd[1573]: cali1cf793aabef: Gained IPv6LL Dec 12 18:29:52.762816 containerd[1664]: time="2025-12-12T18:29:52.762400532Z" level=info msg="connecting to shim d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c" 
address="unix:///run/containerd/s/b0e2aa2b0ca0d3965b4dee397849a071afce3beb67bf08b36f81319db1820eaf" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:29:52.767000 audit[5206]: NETFILTER_CFG table=filter:145 family=2 entries=58 op=nft_register_chain pid=5206 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:29:52.767000 audit[5206]: SYSCALL arch=c000003e syscall=46 success=yes exit=26744 a0=3 a1=7fffa8b7e830 a2=0 a3=7fffa8b7e81c items=0 ppid=4490 pid=5206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:52.767000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:29:52.858473 systemd[1]: Started cri-containerd-d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c.scope - libcontainer container d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c. Dec 12 18:29:52.860388 systemd-networkd[1573]: cali4f832b2a7ec: Gained IPv6LL Dec 12 18:29:52.889000 audit: BPF prog-id=255 op=LOAD Dec 12 18:29:52.890000 audit: BPF prog-id=256 op=LOAD Dec 12 18:29:52.890000 audit[5218]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:52.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343636646565633363613634663636316564393934353532333238 Dec 12 18:29:52.890000 audit: BPF prog-id=256 op=UNLOAD Dec 12 18:29:52.890000 audit[5218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:52.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343636646565633363613634663636316564393934353532333238 Dec 12 18:29:52.890000 audit: BPF prog-id=257 op=LOAD Dec 12 18:29:52.890000 audit[5218]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:52.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343636646565633363613634663636316564393934353532333238 Dec 12 18:29:52.891000 audit: BPF prog-id=258 op=LOAD Dec 12 18:29:52.891000 audit[5218]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:52.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343636646565633363613634663636316564393934353532333238 Dec 12 18:29:52.891000 audit: BPF prog-id=258 op=UNLOAD Dec 12 18:29:52.891000 audit[5218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:52.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343636646565633363613634663636316564393934353532333238 Dec 12 18:29:52.891000 audit: BPF prog-id=257 op=UNLOAD Dec 12 18:29:52.891000 audit[5218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:52.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343636646565633363613634663636316564393934353532333238 Dec 12 18:29:52.892000 audit: BPF prog-id=259 op=LOAD Dec 12 18:29:52.892000 audit[5218]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:52.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438343636646565633363613634663636316564393934353532333238 Dec 12 18:29:52.970458 containerd[1664]: time="2025-12-12T18:29:52.970264941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grwhm,Uid:a11bd055-cf72-461b-9c3e-d8c8d4421dd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c\"" Dec 12 18:29:52.978308 containerd[1664]: time="2025-12-12T18:29:52.978175743Z" level=info msg="CreateContainer within sandbox \"d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:29:52.994196 containerd[1664]: time="2025-12-12T18:29:52.990680922Z" level=info msg="Container e6894bd3536349276a0533481dfaef06ca130b0ec023526f2ef40bd355f47009: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:29:53.004503 containerd[1664]: time="2025-12-12T18:29:53.004425319Z" level=info msg="CreateContainer within sandbox \"d8466deec3ca64f661ed994552328170c115ca8d6bfb11dde55e8152cd11c83c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"e6894bd3536349276a0533481dfaef06ca130b0ec023526f2ef40bd355f47009\"" Dec 12 18:29:53.005961 containerd[1664]: time="2025-12-12T18:29:53.005918630Z" level=info msg="StartContainer for \"e6894bd3536349276a0533481dfaef06ca130b0ec023526f2ef40bd355f47009\"" Dec 12 18:29:53.009985 containerd[1664]: time="2025-12-12T18:29:53.009930000Z" level=info msg="connecting to shim e6894bd3536349276a0533481dfaef06ca130b0ec023526f2ef40bd355f47009" address="unix:///run/containerd/s/b0e2aa2b0ca0d3965b4dee397849a071afce3beb67bf08b36f81319db1820eaf" protocol=ttrpc version=3 Dec 12 18:29:53.049621 systemd[1]: Started cri-containerd-e6894bd3536349276a0533481dfaef06ca130b0ec023526f2ef40bd355f47009.scope - libcontainer container e6894bd3536349276a0533481dfaef06ca130b0ec023526f2ef40bd355f47009. Dec 12 18:29:53.074000 audit: BPF prog-id=260 op=LOAD Dec 12 18:29:53.077000 audit: BPF prog-id=261 op=LOAD Dec 12 18:29:53.077000 audit[5249]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5205 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:53.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383934626433353336333439323736613035333334383164666165 Dec 12 18:29:53.078000 audit: BPF prog-id=261 op=UNLOAD Dec 12 18:29:53.078000 audit[5249]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:53.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383934626433353336333439323736613035333334383164666165 Dec 12 18:29:53.078000 audit: BPF prog-id=262 op=LOAD Dec 12 18:29:53.078000 audit[5249]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5205 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:53.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383934626433353336333439323736613035333334383164666165 Dec 12 18:29:53.079000 audit: BPF prog-id=263 op=LOAD Dec 12 18:29:53.079000 audit[5249]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5205 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:53.079000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383934626433353336333439323736613035333334383164666165 Dec 12 18:29:53.079000 audit: BPF prog-id=263 op=UNLOAD Dec 12 18:29:53.079000 audit[5249]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:53.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383934626433353336333439323736613035333334383164666165 Dec 12 18:29:53.079000 audit: BPF prog-id=262 op=UNLOAD Dec 12 18:29:53.079000 audit[5249]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:53.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383934626433353336333439323736613035333334383164666165 Dec 12 18:29:53.079000 audit: BPF prog-id=264 op=LOAD Dec 12 18:29:53.079000 audit[5249]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5205 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:53.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383934626433353336333439323736613035333334383164666165 Dec 12 18:29:53.085830 kubelet[2966]: E1212 18:29:53.085506 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" podUID="1d148975-a25d-444b-b4af-f56c99e82a44" Dec 12 18:29:53.087777 kubelet[2966]: E1212 18:29:53.087722 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" podUID="dcf3ef86-77cd-46ff-befa-f79857cd4570" Dec 12 18:29:53.137923 containerd[1664]: 
time="2025-12-12T18:29:53.136273792Z" level=info msg="StartContainer for \"e6894bd3536349276a0533481dfaef06ca130b0ec023526f2ef40bd355f47009\" returns successfully" Dec 12 18:29:54.111425 kubelet[2966]: I1212 18:29:54.111276 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-grwhm" podStartSLOduration=65.111241554 podStartE2EDuration="1m5.111241554s" podCreationTimestamp="2025-12-12 18:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:29:54.109419553 +0000 UTC m=+69.841237300" watchObservedRunningTime="2025-12-12 18:29:54.111241554 +0000 UTC m=+69.843059275" Dec 12 18:29:54.145000 audit[5284]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=5284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:54.145000 audit[5284]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd756f410 a2=0 a3=7ffcd756f3fc items=0 ppid=3085 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:54.145000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:54.152000 audit[5284]: NETFILTER_CFG table=nat:147 family=2 entries=44 op=nft_register_rule pid=5284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:54.152000 audit[5284]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcd756f410 a2=0 a3=7ffcd756f3fc items=0 ppid=3085 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:54.152000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:54.460482 systemd-networkd[1573]: cali7b6376c6571: Gained IPv6LL Dec 12 18:29:54.498790 containerd[1664]: time="2025-12-12T18:29:54.498638466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:29:54.818642 containerd[1664]: time="2025-12-12T18:29:54.818531946Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:54.820358 containerd[1664]: time="2025-12-12T18:29:54.820264188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:29:54.820523 containerd[1664]: time="2025-12-12T18:29:54.820456833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:54.821034 kubelet[2966]: E1212 18:29:54.820960 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:29:54.821385 kubelet[2966]: E1212 18:29:54.821258 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:29:54.821789 kubelet[2966]: E1212 18:29:54.821692 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw84z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cwqst_calico-system(1566f111-6981-4bf6-b05d-69ebc0c0ffaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:54.823089 kubelet[2966]: E1212 18:29:54.823003 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwqst" podUID="1566f111-6981-4bf6-b05d-69ebc0c0ffaa" Dec 12 18:29:55.146000 audit[5286]: NETFILTER_CFG table=filter:148 family=2 entries=14 op=nft_register_rule pid=5286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:55.146000 audit[5286]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd4810c9e0 a2=0 a3=7ffd4810c9cc items=0 ppid=3085 pid=5286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:55.146000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:55.178000 audit[5286]: NETFILTER_CFG table=nat:149 family=2 entries=56 op=nft_register_chain pid=5286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:29:55.178000 audit[5286]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd4810c9e0 a2=0 a3=7ffd4810c9cc items=0 ppid=3085 pid=5286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:29:55.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:29:55.497405 containerd[1664]: time="2025-12-12T18:29:55.497300930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:29:55.848679 containerd[1664]: time="2025-12-12T18:29:55.848466091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:55.850523 containerd[1664]: time="2025-12-12T18:29:55.850463718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:29:55.850696 containerd[1664]: time="2025-12-12T18:29:55.850661648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:55.851053 kubelet[2966]: E1212 18:29:55.850987 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:29:55.851530 kubelet[2966]: E1212 18:29:55.851070 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:29:55.851530 kubelet[2966]: E1212 18:29:55.851329 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqr7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-557c4cb5b5-4c5jw_calico-apiserver(d20e72ff-82d7-435d-b1ff-5952de8d6823): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:55.853316 kubelet[2966]: E1212 18:29:55.853227 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" podUID="d20e72ff-82d7-435d-b1ff-5952de8d6823" Dec 12 18:29:57.497864 containerd[1664]: time="2025-12-12T18:29:57.497735589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:29:57.833964 containerd[1664]: time="2025-12-12T18:29:57.833692913Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:57.835764 containerd[1664]: time="2025-12-12T18:29:57.835654788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:29:57.835987 containerd[1664]: time="2025-12-12T18:29:57.835789469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:57.836349 kubelet[2966]: E1212 18:29:57.836296 2966 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:29:57.837153 kubelet[2966]: E1212 18:29:57.836860 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:29:57.837153 kubelet[2966]: E1212 18:29:57.837082 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9hcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzv84_calico-system(30dcceea-b67a-4ecb-b6c6-16baeb5ae67c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:57.840495 containerd[1664]: time="2025-12-12T18:29:57.840440239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:29:58.150775 containerd[1664]: time="2025-12-12T18:29:58.150586836Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:58.152655 containerd[1664]: time="2025-12-12T18:29:58.152459962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:58.152655 containerd[1664]: time="2025-12-12T18:29:58.152462102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:29:58.153035 kubelet[2966]: E1212 18:29:58.152981 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:29:58.153155 kubelet[2966]: E1212 18:29:58.153054 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:29:58.153432 kubelet[2966]: E1212 18:29:58.153285 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9hcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzv84_calico-system(30dcceea-b67a-4ecb-b6c6-16baeb5ae67c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:58.154607 kubelet[2966]: E1212 18:29:58.154530 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:29:59.497445 containerd[1664]: time="2025-12-12T18:29:59.497371337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:29:59.823311 containerd[1664]: time="2025-12-12T18:29:59.822925909Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:29:59.825117 containerd[1664]: time="2025-12-12T18:29:59.824986281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:29:59.825117 containerd[1664]: time="2025-12-12T18:29:59.825047903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:29:59.825554 kubelet[2966]: E1212 18:29:59.825483 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:29:59.827203 kubelet[2966]: E1212 18:29:59.825566 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:29:59.827203 kubelet[2966]: E1212 18:29:59.825755 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:54e2ece65d4e4e24bb0d14e858b9af4a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kv6kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d77754785-tzmkt_calico-system(f480f83d-d2ca-423c-8dd5-8e1df9a9ca33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:29:59.829701 containerd[1664]: time="2025-12-12T18:29:59.829664824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:30:00.134536 containerd[1664]: time="2025-12-12T18:30:00.134277901Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:30:00.136012 containerd[1664]: time="2025-12-12T18:30:00.135958582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:30:00.136121 containerd[1664]: time="2025-12-12T18:30:00.136022146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:30:00.136961 kubelet[2966]: E1212 18:30:00.136543 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:30:00.136961 kubelet[2966]: E1212 18:30:00.136636 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:30:00.136961 kubelet[2966]: E1212 18:30:00.136872 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kv6kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d77754785-tzmkt_calico-system(f480f83d-d2ca-423c-8dd5-8e1df9a9ca33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:30:00.138203 kubelet[2966]: E1212 18:30:00.138127 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d77754785-tzmkt" podUID="f480f83d-d2ca-423c-8dd5-8e1df9a9ca33" Dec 12 18:30:05.502334 containerd[1664]: time="2025-12-12T18:30:05.502117869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:30:05.853409 containerd[1664]: time="2025-12-12T18:30:05.853209614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:30:05.854608 containerd[1664]: time="2025-12-12T18:30:05.854558223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
12 18:30:05.854755 containerd[1664]: time="2025-12-12T18:30:05.854571521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:30:05.855061 kubelet[2966]: E1212 18:30:05.854994 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:30:05.856528 kubelet[2966]: E1212 18:30:05.855630 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:30:05.856528 kubelet[2966]: E1212 18:30:05.856016 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6szg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-557c4cb5b5-v5k7t_calico-apiserver(dcf3ef86-77cd-46ff-befa-f79857cd4570): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:30:05.857399 containerd[1664]: time="2025-12-12T18:30:05.856486909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 
18:30:05.858031 kubelet[2966]: E1212 18:30:05.857349 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" podUID="dcf3ef86-77cd-46ff-befa-f79857cd4570" Dec 12 18:30:06.163490 containerd[1664]: time="2025-12-12T18:30:06.163210511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:30:06.164651 containerd[1664]: time="2025-12-12T18:30:06.164601680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:30:06.164921 containerd[1664]: time="2025-12-12T18:30:06.164627295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:30:06.165125 kubelet[2966]: E1212 18:30:06.165044 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:30:06.165657 kubelet[2966]: E1212 18:30:06.165146 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:30:06.165657 kubelet[2966]: E1212 18:30:06.165456 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pjtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-84669789bf-xx9l4_calico-system(1d148975-a25d-444b-b4af-f56c99e82a44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:30:06.166883 kubelet[2966]: E1212 18:30:06.166729 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" podUID="1d148975-a25d-444b-b4af-f56c99e82a44" Dec 12 18:30:09.497674 kubelet[2966]: E1212 18:30:09.497305 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwqst" podUID="1566f111-6981-4bf6-b05d-69ebc0c0ffaa" Dec 12 18:30:09.498718 kubelet[2966]: E1212 18:30:09.498645 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:30:10.499385 kubelet[2966]: E1212 18:30:10.499307 2966 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" podUID="d20e72ff-82d7-435d-b1ff-5952de8d6823" Dec 12 18:30:13.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.23.66:22-139.178.89.65:51576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:13.280457 kernel: kauditd_printk_skb: 155 callbacks suppressed Dec 12 18:30:13.280571 kernel: audit: type=1130 audit(1765564213.275:761): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.23.66:22-139.178.89.65:51576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:13.275246 systemd[1]: Started sshd@9-10.230.23.66:22-139.178.89.65:51576.service - OpenSSH per-connection server daemon (139.178.89.65:51576). Dec 12 18:30:14.194549 sshd[5336]: Accepted publickey for core from 139.178.89.65 port 51576 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:30:14.194000 audit[5336]: USER_ACCT pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:14.200000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:14.202797 kernel: audit: type=1101 audit(1765564214.194:762): pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:14.202893 kernel: audit: type=1103 audit(1765564214.200:763): pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:14.204558 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:30:14.206339 kernel: audit: type=1006 audit(1765564214.200:764): pid=5336 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 12 18:30:14.200000 audit[5336]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee92731e0 a2=3 a3=0 items=0 ppid=1 pid=5336 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:14.211413 kernel: audit: type=1300 audit(1765564214.200:764): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee92731e0 a2=3 a3=0 
items=0 ppid=1 pid=5336 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:14.200000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:14.218205 kernel: audit: type=1327 audit(1765564214.200:764): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:14.223566 systemd-logind[1633]: New session 12 of user core. Dec 12 18:30:14.232666 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 18:30:14.239000 audit[5336]: USER_START pid=5336 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:14.248237 kernel: audit: type=1105 audit(1765564214.239:765): pid=5336 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:14.248000 audit[5339]: CRED_ACQ pid=5339 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:14.254282 kernel: audit: type=1103 audit(1765564214.248:766): pid=5339 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:15.330187 sshd[5339]: Connection closed by 139.178.89.65 port 51576 Dec 12 18:30:15.329452 sshd-session[5336]: pam_unix(sshd:session): session closed for user core Dec 12 18:30:15.334000 audit[5336]: USER_END pid=5336 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:15.348210 kernel: audit: type=1106 audit(1765564215.334:767): pid=5336 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:15.335000 audit[5336]: CRED_DISP pid=5336 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:15.350460 systemd-logind[1633]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:30:15.354601 systemd[1]: sshd@9-10.230.23.66:22-139.178.89.65:51576.service: Deactivated successfully. 
Dec 12 18:30:15.357113 kernel: audit: type=1104 audit(1765564215.335:768): pid=5336 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:15.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.23.66:22-139.178.89.65:51576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:15.362266 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:30:15.367934 systemd-logind[1633]: Removed session 12. Dec 12 18:30:15.499114 kubelet[2966]: E1212 18:30:15.498955 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d77754785-tzmkt" podUID="f480f83d-d2ca-423c-8dd5-8e1df9a9ca33" Dec 12 18:30:19.498586 kubelet[2966]: E1212 18:30:19.498486 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" podUID="dcf3ef86-77cd-46ff-befa-f79857cd4570" Dec 12 18:30:20.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.23.66:22-139.178.89.65:59176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:20.486939 systemd[1]: Started sshd@10-10.230.23.66:22-139.178.89.65:59176.service - OpenSSH per-connection server daemon (139.178.89.65:59176). Dec 12 18:30:20.508012 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:30:20.508150 kernel: audit: type=1130 audit(1765564220.486:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.23.66:22-139.178.89.65:59176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:30:20.508286 containerd[1664]: time="2025-12-12T18:30:20.504837408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:30:20.508910 kubelet[2966]: E1212 18:30:20.503627 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" podUID="1d148975-a25d-444b-b4af-f56c99e82a44" Dec 12 18:30:20.839811 containerd[1664]: time="2025-12-12T18:30:20.839581232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:30:20.841186 containerd[1664]: time="2025-12-12T18:30:20.841108298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:30:20.841516 containerd[1664]: time="2025-12-12T18:30:20.841321249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:30:20.841918 kubelet[2966]: E1212 18:30:20.841839 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:30:20.842188 kubelet[2966]: E1212 18:30:20.842133 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:30:20.842787 kubelet[2966]: E1212 18:30:20.842703 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw84z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cwqst_calico-system(1566f111-6981-4bf6-b05d-69ebc0c0ffaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:30:20.844716 kubelet[2966]: E1212 18:30:20.844655 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwqst" podUID="1566f111-6981-4bf6-b05d-69ebc0c0ffaa" Dec 12 18:30:21.338964 sshd[5354]: Accepted publickey for core from 139.178.89.65 port 59176 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU 
Dec 12 18:30:21.341879 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:30:21.360333 kernel: audit: type=1101 audit(1765564221.338:771): pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:21.360553 kernel: audit: type=1103 audit(1765564221.339:772): pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:21.360621 kernel: audit: type=1006 audit(1765564221.341:773): pid=5354 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 12 18:30:21.360665 kernel: audit: type=1300 audit(1765564221.341:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd738993f0 a2=3 a3=0 items=0 ppid=1 pid=5354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:21.338000 audit[5354]: USER_ACCT pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:21.339000 audit[5354]: CRED_ACQ pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:21.341000 audit[5354]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd738993f0 a2=3 a3=0 items=0 ppid=1 pid=5354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:21.341000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:21.366203 kernel: audit: type=1327 audit(1765564221.341:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:21.371716 systemd-logind[1633]: New session 13 of user core. Dec 12 18:30:21.378519 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 12 18:30:21.384000 audit[5354]: USER_START pid=5354 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:21.393206 kernel: audit: type=1105 audit(1765564221.384:774): pid=5354 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:21.393315 kernel: audit: type=1103 audit(1765564221.392:775): pid=5357 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:21.392000 audit[5357]: CRED_ACQ pid=5357 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:21.499017 containerd[1664]: time="2025-12-12T18:30:21.498148404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:30:21.809146 containerd[1664]: time="2025-12-12T18:30:21.808877019Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:30:21.810241 containerd[1664]: time="2025-12-12T18:30:21.809982970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:30:21.810241 containerd[1664]: time="2025-12-12T18:30:21.810074807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:30:21.810635 kubelet[2966]: E1212 18:30:21.810559 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:30:21.810987 kubelet[2966]: E1212 18:30:21.810647 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:30:21.810987 kubelet[2966]: E1212 18:30:21.810890 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqr7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-557c4cb5b5-4c5jw_calico-apiserver(d20e72ff-82d7-435d-b1ff-5952de8d6823): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:30:21.816985 kubelet[2966]: E1212 18:30:21.816924 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" podUID="d20e72ff-82d7-435d-b1ff-5952de8d6823" Dec 12 18:30:21.974618 sshd[5357]: Connection closed by 139.178.89.65 port 59176 Dec 12 18:30:21.975359 sshd-session[5354]: pam_unix(sshd:session): session closed for user core Dec 12 18:30:21.978000 audit[5354]: USER_END pid=5354 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:21.990193 kernel: audit: type=1106 audit(1765564221.978:776): pid=5354 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:21.991836 systemd[1]: sshd@10-10.230.23.66:22-139.178.89.65:59176.service: Deactivated successfully. Dec 12 18:30:21.995989 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:30:21.978000 audit[5354]: CRED_DISP pid=5354 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:22.005254 kernel: audit: type=1104 audit(1765564221.978:777): pid=5354 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:21.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.23.66:22-139.178.89.65:59176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:22.010444 systemd-logind[1633]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:30:22.013974 systemd-logind[1633]: Removed session 13. Dec 12 18:30:24.499869 containerd[1664]: time="2025-12-12T18:30:24.499446695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:30:24.825086 containerd[1664]: time="2025-12-12T18:30:24.824861943Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:30:24.826457 containerd[1664]: time="2025-12-12T18:30:24.826391713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:30:24.826579 containerd[1664]: time="2025-12-12T18:30:24.826433298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:30:24.826904 kubelet[2966]: E1212 18:30:24.826758 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:30:24.826904 kubelet[2966]: E1212 18:30:24.826859 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:30:24.827602 kubelet[2966]: E1212 18:30:24.827090 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9hcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzv84_calico-system(30dcceea-b67a-4ecb-b6c6-16baeb5ae67c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:30:24.830499 containerd[1664]: time="2025-12-12T18:30:24.830135664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:30:25.173153 containerd[1664]: time="2025-12-12T18:30:25.172924766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:30:25.174907 containerd[1664]: time="2025-12-12T18:30:25.174839186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:30:25.174996 containerd[1664]: time="2025-12-12T18:30:25.174951537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:30:25.175310 kubelet[2966]: E1212 18:30:25.175253 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:30:25.175454 kubelet[2966]: E1212 18:30:25.175425 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:30:25.175901 kubelet[2966]: E1212 18:30:25.175839 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9hcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzv84_calico-system(30dcceea-b67a-4ecb-b6c6-16baeb5ae67c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:30:25.177651 kubelet[2966]: E1212 18:30:25.177588 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:30:27.129196 systemd[1]: Started sshd@11-10.230.23.66:22-139.178.89.65:59182.service - OpenSSH per-connection server daemon (139.178.89.65:59182). 
Dec 12 18:30:27.139949 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:30:27.140126 kernel: audit: type=1130 audit(1765564227.128:779): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.23.66:22-139.178.89.65:59182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:27.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.23.66:22-139.178.89.65:59182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:27.498860 containerd[1664]: time="2025-12-12T18:30:27.498061571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:30:27.834071 containerd[1664]: time="2025-12-12T18:30:27.832870339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:30:27.835355 containerd[1664]: time="2025-12-12T18:30:27.835151959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:30:27.835355 containerd[1664]: time="2025-12-12T18:30:27.835306829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:30:27.836247 kubelet[2966]: E1212 18:30:27.835690 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:30:27.836247 kubelet[2966]: E1212 18:30:27.835798 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:30:27.838737 kubelet[2966]: E1212 18:30:27.837997 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:54e2ece65d4e4e24bb0d14e858b9af4a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kv6kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d77754785-tzmkt_calico-system(f480f83d-d2ca-423c-8dd5-8e1df9a9ca33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:30:27.840179 containerd[1664]: time="2025-12-12T18:30:27.840103969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:30:27.932000 audit[5377]: USER_ACCT pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:27.935442 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:30:27.937914 sshd[5377]: Accepted publickey for core from 139.178.89.65 port 59182 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:30:27.933000 audit[5377]: CRED_ACQ pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:27.940351 kernel: audit: type=1101 audit(1765564227.932:780): pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:27.940443 kernel: audit: type=1103 audit(1765564227.933:781): pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:27.933000 audit[5377]: SYSCALL 
arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8470e7c0 a2=3 a3=0 items=0 ppid=1 pid=5377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:27.953913 systemd-logind[1633]: New session 14 of user core. Dec 12 18:30:27.954929 kernel: audit: type=1006 audit(1765564227.933:782): pid=5377 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 12 18:30:27.955003 kernel: audit: type=1300 audit(1765564227.933:782): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8470e7c0 a2=3 a3=0 items=0 ppid=1 pid=5377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:27.933000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:27.964220 kernel: audit: type=1327 audit(1765564227.933:782): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:27.968652 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 18:30:27.973000 audit[5377]: USER_START pid=5377 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:27.981193 kernel: audit: type=1105 audit(1765564227.973:783): pid=5377 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:27.981000 audit[5380]: CRED_ACQ pid=5380 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:27.989209 kernel: audit: type=1103 audit(1765564227.981:784): pid=5380 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:28.161596 containerd[1664]: time="2025-12-12T18:30:28.161375031Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:30:28.163884 containerd[1664]: time="2025-12-12T18:30:28.163673840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:30:28.163884 containerd[1664]: time="2025-12-12T18:30:28.163741536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:30:28.164319 kubelet[2966]: E1212 18:30:28.164225 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:30:28.164494 kubelet[2966]: E1212 18:30:28.164363 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:30:28.166279 kubelet[2966]: E1212 18:30:28.166213 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kv6kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d77754785-tzmkt_calico-system(f480f83d-d2ca-423c-8dd5-8e1df9a9ca33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:30:28.167615 kubelet[2966]: E1212 18:30:28.167524 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d77754785-tzmkt" podUID="f480f83d-d2ca-423c-8dd5-8e1df9a9ca33" Dec 12 18:30:28.489863 sshd[5380]: 
Connection closed by 139.178.89.65 port 59182 Dec 12 18:30:28.493000 audit[5377]: USER_END pid=5377 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:28.490793 sshd-session[5377]: pam_unix(sshd:session): session closed for user core Dec 12 18:30:28.504542 systemd[1]: sshd@11-10.230.23.66:22-139.178.89.65:59182.service: Deactivated successfully. Dec 12 18:30:28.509544 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:30:28.510194 kernel: audit: type=1106 audit(1765564228.493:785): pid=5377 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:28.519121 kernel: audit: type=1104 audit(1765564228.493:786): pid=5377 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:28.493000 audit[5377]: CRED_DISP pid=5377 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:28.519188 systemd-logind[1633]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:30:28.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.23.66:22-139.178.89.65:59182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:28.522931 systemd-logind[1633]: Removed session 14. Dec 12 18:30:28.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.23.66:22-139.178.89.65:59192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:28.646863 systemd[1]: Started sshd@12-10.230.23.66:22-139.178.89.65:59192.service - OpenSSH per-connection server daemon (139.178.89.65:59192). 
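The PullImage failures for ghcr.io/flatcar/calico/whisker and whisker-backend in the entries above are containerd's CRI image service failing at the resolve step: the registry answers 404 for the tag, and the CRI plugin reports it to the kubelet as gRPC code NotFound. Below is a minimal Go sketch of that same resolution step, offered only as an illustration: it assumes the stock containerd socket path, the "k8s.io" namespace used for Kubernetes-managed images, and the containerd 1.x Go client module path, none of which are stated in this log, and it talks to containerd directly rather than through the CRI gRPC surface the kubelet actually uses.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed default socket path; adjust if containerd is configured differently.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images conventionally live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same reference the kubelet requested; a missing tag fails while resolving
	// the manifest, which is what the "failed to resolve image ... not found"
	// entries above report.
	ref := "ghcr.io/flatcar/calico/whisker:v3.30.4"
	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		if errdefs.IsNotFound(err) {
			fmt.Printf("tag not found in registry: %v\n", err)
			return
		}
		log.Fatal(err)
	}
	fmt.Println("pulled", ref)
}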
Dec 12 18:30:29.432000 audit[5392]: USER_ACCT pid=5392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:29.434047 sshd[5392]: Accepted publickey for core from 139.178.89.65 port 59192 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:30:29.434000 audit[5392]: CRED_ACQ pid=5392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:29.434000 audit[5392]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3e1b59f0 a2=3 a3=0 items=0 ppid=1 pid=5392 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:29.434000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:29.436720 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:30:29.448974 systemd-logind[1633]: New session 15 of user core. Dec 12 18:30:29.458744 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 18:30:29.466000 audit[5392]: USER_START pid=5392 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:29.469000 audit[5395]: CRED_ACQ pid=5395 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:30.055190 sshd[5395]: Connection closed by 139.178.89.65 port 59192 Dec 12 18:30:30.054514 sshd-session[5392]: pam_unix(sshd:session): session closed for user core Dec 12 18:30:30.055000 audit[5392]: USER_END pid=5392 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:30.055000 audit[5392]: CRED_DISP pid=5392 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:30.060436 systemd[1]: sshd@12-10.230.23.66:22-139.178.89.65:59192.service: Deactivated successfully. Dec 12 18:30:30.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.23.66:22-139.178.89.65:59192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:30.064459 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:30:30.067289 systemd-logind[1633]: Session 15 logged out. Waiting for processes to exit. 
Dec 12 18:30:30.070234 systemd-logind[1633]: Removed session 15. Dec 12 18:30:30.212788 systemd[1]: Started sshd@13-10.230.23.66:22-139.178.89.65:59206.service - OpenSSH per-connection server daemon (139.178.89.65:59206). Dec 12 18:30:30.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.23.66:22-139.178.89.65:59206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:31.022000 audit[5405]: USER_ACCT pid=5405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:31.024262 sshd[5405]: Accepted publickey for core from 139.178.89.65 port 59206 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:30:31.024000 audit[5405]: CRED_ACQ pid=5405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:31.024000 audit[5405]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1aab1490 a2=3 a3=0 items=0 ppid=1 pid=5405 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:31.024000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:31.027060 sshd-session[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:30:31.036644 systemd-logind[1633]: New session 16 of user core. Dec 12 18:30:31.043731 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 18:30:31.048000 audit[5405]: USER_START pid=5405 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:31.052000 audit[5408]: CRED_ACQ pid=5408 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:31.499672 containerd[1664]: time="2025-12-12T18:30:31.499591601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:30:31.586334 sshd[5408]: Connection closed by 139.178.89.65 port 59206 Dec 12 18:30:31.586013 sshd-session[5405]: pam_unix(sshd:session): session closed for user core Dec 12 18:30:31.589000 audit[5405]: USER_END pid=5405 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:31.589000 audit[5405]: CRED_DISP pid=5405 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:31.595715 systemd[1]: sshd@13-10.230.23.66:22-139.178.89.65:59206.service: Deactivated successfully. Dec 12 18:30:31.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.23.66:22-139.178.89.65:59206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:31.599796 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:30:31.603714 systemd-logind[1633]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:30:31.605918 systemd-logind[1633]: Removed session 16. 
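The audit PROCTITLE records in the session entries above carry the process title as a hex string, with argv elements separated by NUL bytes in the raw payload. A small Go sketch for decoding them; the decoding logic is generic and only the sample value is copied from the records above.

package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex payload into a readable
// command line; NUL separators between argv elements become spaces.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Value taken from the PROCTITLE records of the sshd sessions above.
	title, err := decodeProctitle("737368642D73657373696F6E3A20636F7265205B707269765D")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(title) // prints: sshd-session: core [priv]
}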
Dec 12 18:30:31.821484 containerd[1664]: time="2025-12-12T18:30:31.821205750Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:30:31.823123 containerd[1664]: time="2025-12-12T18:30:31.823023822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:30:31.823123 containerd[1664]: time="2025-12-12T18:30:31.823082888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:30:31.823889 kubelet[2966]: E1212 18:30:31.823368 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:30:31.823889 kubelet[2966]: E1212 18:30:31.823455 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:30:31.823889 kubelet[2966]: E1212 18:30:31.823718 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pjtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-84669789bf-xx9l4_calico-system(1d148975-a25d-444b-b4af-f56c99e82a44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:30:31.825064 kubelet[2966]: E1212 18:30:31.824954 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" podUID="1d148975-a25d-444b-b4af-f56c99e82a44" Dec 12 18:30:32.500148 kubelet[2966]: E1212 18:30:32.500048 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" podUID="d20e72ff-82d7-435d-b1ff-5952de8d6823" Dec 12 18:30:33.499604 containerd[1664]: time="2025-12-12T18:30:33.499500941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:30:33.837612 containerd[1664]: time="2025-12-12T18:30:33.837199102Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:30:33.839099 containerd[1664]: time="2025-12-12T18:30:33.839004255Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:30:33.839408 containerd[1664]: time="2025-12-12T18:30:33.839084869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:30:33.839995 kubelet[2966]: E1212 18:30:33.839917 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:30:33.841408 kubelet[2966]: E1212 18:30:33.840022 2966 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:30:33.841408 kubelet[2966]: E1212 18:30:33.840273 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6szg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-557c4cb5b5-v5k7t_calico-apiserver(dcf3ef86-77cd-46ff-befa-f79857cd4570): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:30:33.842275 kubelet[2966]: E1212 18:30:33.841762 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" podUID="dcf3ef86-77cd-46ff-befa-f79857cd4570" Dec 12 18:30:34.498774 kubelet[2966]: E1212 18:30:34.498663 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwqst" podUID="1566f111-6981-4bf6-b05d-69ebc0c0ffaa" Dec 12 18:30:36.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.23.66:22-139.178.89.65:48336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:36.745886 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 18:30:36.746036 kernel: audit: type=1130 audit(1765564236.739:806): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.23.66:22-139.178.89.65:48336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:36.740549 systemd[1]: Started sshd@14-10.230.23.66:22-139.178.89.65:48336.service - OpenSSH per-connection server daemon (139.178.89.65:48336). Dec 12 18:30:37.543000 audit[5421]: USER_ACCT pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:37.550014 sshd[5421]: Accepted publickey for core from 139.178.89.65 port 48336 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:30:37.552277 kernel: audit: type=1101 audit(1765564237.543:807): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:37.551000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:37.558684 sshd-session[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:30:37.560208 kernel: audit: type=1103 audit(1765564237.551:808): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:37.564216 kernel: audit: type=1006 audit(1765564237.551:809): pid=5421 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 12 18:30:37.551000 audit[5421]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa0362640 a2=3 a3=0 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:37.570234 kernel: audit: type=1300 audit(1765564237.551:809): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa0362640 a2=3 a3=0 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:37.551000 audit: 
PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:37.573193 kernel: audit: type=1327 audit(1765564237.551:809): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:37.578920 systemd-logind[1633]: New session 17 of user core. Dec 12 18:30:37.587532 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 18:30:37.593000 audit[5421]: USER_START pid=5421 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:37.601234 kernel: audit: type=1105 audit(1765564237.593:810): pid=5421 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:37.600000 audit[5424]: CRED_ACQ pid=5424 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:37.610223 kernel: audit: type=1103 audit(1765564237.600:811): pid=5424 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:38.087233 sshd[5424]: Connection closed by 139.178.89.65 port 48336 Dec 12 18:30:38.088193 sshd-session[5421]: pam_unix(sshd:session): session closed for user core Dec 12 18:30:38.089000 audit[5421]: USER_END pid=5421 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:38.097786 systemd[1]: sshd@14-10.230.23.66:22-139.178.89.65:48336.service: Deactivated successfully. Dec 12 18:30:38.089000 audit[5421]: CRED_DISP pid=5421 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:38.100452 kernel: audit: type=1106 audit(1765564238.089:812): pid=5421 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:38.100537 kernel: audit: type=1104 audit(1765564238.089:813): pid=5421 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:38.102357 systemd[1]: session-17.scope: Deactivated successfully. 
Dec 12 18:30:38.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.23.66:22-139.178.89.65:48336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:38.106253 systemd-logind[1633]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:30:38.108461 systemd-logind[1633]: Removed session 17. Dec 12 18:30:39.498154 kubelet[2966]: E1212 18:30:39.497972 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:30:42.500873 kubelet[2966]: E1212 18:30:42.500345 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d77754785-tzmkt" podUID="f480f83d-d2ca-423c-8dd5-8e1df9a9ca33" Dec 12 18:30:43.256026 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:30:43.256221 kernel: audit: type=1130 audit(1765564243.244:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.23.66:22-139.178.89.65:39602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:43.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.23.66:22-139.178.89.65:39602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:43.245957 systemd[1]: Started sshd@15-10.230.23.66:22-139.178.89.65:39602.service - OpenSSH per-connection server daemon (139.178.89.65:39602). 
Dec 12 18:30:44.035000 audit[5466]: USER_ACCT pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:44.044363 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:30:44.047087 kernel: audit: type=1101 audit(1765564244.035:816): pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:44.047376 sshd[5466]: Accepted publickey for core from 139.178.89.65 port 39602 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:30:44.041000 audit[5466]: CRED_ACQ pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:44.053730 kernel: audit: type=1103 audit(1765564244.041:817): pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:44.053817 kernel: audit: type=1006 audit(1765564244.041:818): pid=5466 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 12 18:30:44.041000 audit[5466]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9b54d260 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:44.058383 kernel: audit: type=1300 audit(1765564244.041:818): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9b54d260 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:44.041000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:44.063177 kernel: audit: type=1327 audit(1765564244.041:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:44.066493 systemd-logind[1633]: New session 18 of user core. Dec 12 18:30:44.074578 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 12 18:30:44.080000 audit[5466]: USER_START pid=5466 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:44.088295 kernel: audit: type=1105 audit(1765564244.080:819): pid=5466 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:44.087000 audit[5469]: CRED_ACQ pid=5469 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:44.094221 kernel: audit: type=1103 audit(1765564244.087:820): pid=5469 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:44.500374 kubelet[2966]: E1212 18:30:44.500112 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" podUID="d20e72ff-82d7-435d-b1ff-5952de8d6823" Dec 12 18:30:44.599224 sshd[5469]: Connection closed by 139.178.89.65 port 39602 Dec 12 18:30:44.599975 sshd-session[5466]: pam_unix(sshd:session): session closed for user core Dec 12 18:30:44.601000 audit[5466]: USER_END pid=5466 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:44.607787 systemd[1]: sshd@15-10.230.23.66:22-139.178.89.65:39602.service: Deactivated successfully. 
Dec 12 18:30:44.601000 audit[5466]: CRED_DISP pid=5466 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:44.610192 kernel: audit: type=1106 audit(1765564244.601:821): pid=5466 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:44.610298 kernel: audit: type=1104 audit(1765564244.601:822): pid=5466 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:44.612841 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:30:44.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.23.66:22-139.178.89.65:39602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:44.619208 systemd-logind[1633]: Session 18 logged out. Waiting for processes to exit. Dec 12 18:30:44.622932 systemd-logind[1633]: Removed session 18. Dec 12 18:30:46.497799 kubelet[2966]: E1212 18:30:46.497647 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" podUID="dcf3ef86-77cd-46ff-befa-f79857cd4570" Dec 12 18:30:47.500037 kubelet[2966]: E1212 18:30:47.499947 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" podUID="1d148975-a25d-444b-b4af-f56c99e82a44" Dec 12 18:30:48.498230 kubelet[2966]: E1212 18:30:48.497782 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwqst" podUID="1566f111-6981-4bf6-b05d-69ebc0c0ffaa" Dec 12 18:30:49.761678 systemd[1]: Started sshd@16-10.230.23.66:22-139.178.89.65:39606.service - OpenSSH per-connection server daemon (139.178.89.65:39606). 
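Between the ErrImagePull entries and the later "Back-off pulling image" entries above, the kubelet stops calling PullImage for a given image and waits out a per-image delay that doubles up to a ceiling. The sketch below only illustrates that doubling-with-cap shape; the 10-second base and 5-minute cap are the commonly cited kubelet defaults, assumed here rather than taken from this log.

package main

import (
	"fmt"
	"time"
)

// pullDelays models a doubling back-off with a ceiling, the shape of the
// per-image pull back-off behind the "Back-off pulling image" messages.
func pullDelays(base, maxDelay time.Duration, attempts int) []time.Duration {
	out := make([]time.Duration, 0, attempts)
	d := base
	for i := 0; i < attempts; i++ {
		out = append(out, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return out
}

func main() {
	// Assumed defaults: 10s base, 5m cap.
	for i, d := range pullDelays(10*time.Second, 5*time.Minute, 7) {
		fmt.Printf("attempt %d: wait %v\n", i+1, d)
	}
}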
Dec 12 18:30:49.768959 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:30:49.769036 kernel: audit: type=1130 audit(1765564249.760:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.23.66:22-139.178.89.65:39606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:49.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.23.66:22-139.178.89.65:39606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:50.549000 audit[5484]: USER_ACCT pid=5484 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:50.551548 sshd[5484]: Accepted publickey for core from 139.178.89.65 port 39606 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:30:50.557530 kernel: audit: type=1101 audit(1765564250.549:825): pid=5484 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:50.557737 sshd-session[5484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:30:50.556000 audit[5484]: CRED_ACQ pid=5484 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:50.568926 kernel: audit: type=1103 audit(1765564250.556:826): pid=5484 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:50.569118 kernel: audit: type=1006 audit(1765564250.556:827): pid=5484 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 12 18:30:50.556000 audit[5484]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0fcbe870 a2=3 a3=0 items=0 ppid=1 pid=5484 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:50.577494 kernel: audit: type=1300 audit(1765564250.556:827): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0fcbe870 a2=3 a3=0 items=0 ppid=1 pid=5484 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:50.556000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:50.580441 kernel: audit: type=1327 audit(1765564250.556:827): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:50.587326 systemd-logind[1633]: New session 19 of user core. Dec 12 18:30:50.593428 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 18:30:50.600000 audit[5484]: USER_START pid=5484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:50.608197 kernel: audit: type=1105 audit(1765564250.600:828): pid=5484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:50.608354 kernel: audit: type=1103 audit(1765564250.603:829): pid=5489 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:50.603000 audit[5489]: CRED_ACQ pid=5489 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:51.098817 sshd[5489]: Connection closed by 139.178.89.65 port 39606 Dec 12 18:30:51.099888 sshd-session[5484]: pam_unix(sshd:session): session closed for user core Dec 12 18:30:51.101000 audit[5484]: USER_END pid=5484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:51.112364 kernel: audit: type=1106 audit(1765564251.101:830): pid=5484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:51.115341 kernel: audit: type=1104 audit(1765564251.101:831): pid=5484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:51.101000 audit[5484]: CRED_DISP pid=5484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:51.118992 systemd[1]: sshd@16-10.230.23.66:22-139.178.89.65:39606.service: Deactivated successfully. Dec 12 18:30:51.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.23.66:22-139.178.89.65:39606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:51.124854 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:30:51.126642 systemd-logind[1633]: Session 19 logged out. Waiting for processes to exit. Dec 12 18:30:51.129641 systemd-logind[1633]: Removed session 19. 
Dec 12 18:30:51.353719 systemd[1]: Started sshd@17-10.230.23.66:22-139.178.89.65:53682.service - OpenSSH per-connection server daemon (139.178.89.65:53682). Dec 12 18:30:51.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.23.66:22-139.178.89.65:53682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:51.501533 kubelet[2966]: E1212 18:30:51.501324 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:30:52.212531 sshd[5500]: Accepted publickey for core from 139.178.89.65 port 53682 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:30:52.211000 audit[5500]: USER_ACCT pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:52.212000 audit[5500]: CRED_ACQ pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:52.212000 audit[5500]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf6b36d30 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:52.212000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:52.214558 sshd-session[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:30:52.224568 systemd-logind[1633]: New session 20 of user core. Dec 12 18:30:52.233496 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 12 18:30:52.239000 audit[5500]: USER_START pid=5500 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:52.242000 audit[5503]: CRED_ACQ pid=5503 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:53.212601 sshd[5503]: Connection closed by 139.178.89.65 port 53682 Dec 12 18:30:53.217834 sshd-session[5500]: pam_unix(sshd:session): session closed for user core Dec 12 18:30:53.222000 audit[5500]: USER_END pid=5500 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:53.222000 audit[5500]: CRED_DISP pid=5500 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:53.232572 systemd[1]: sshd@17-10.230.23.66:22-139.178.89.65:53682.service: Deactivated successfully. Dec 12 18:30:53.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.23.66:22-139.178.89.65:53682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:53.236645 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:30:53.240434 systemd-logind[1633]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:30:53.242774 systemd-logind[1633]: Removed session 20. Dec 12 18:30:53.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.23.66:22-139.178.89.65:53698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:53.358378 systemd[1]: Started sshd@18-10.230.23.66:22-139.178.89.65:53698.service - OpenSSH per-connection server daemon (139.178.89.65:53698). 
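Each kernel-echoed audit record's audit(SECONDS.MILLIS:SERIAL) field is the Unix epoch timestamp plus a per-boot serial number, and the epoch lines up with the wall-clock journal prefix (which here appears to be UTC). A minimal Go check, using a value from the netfilter records that follow.

package main

import (
	"fmt"
	"time"
)

func main() {
	// audit(1765564255.402:848): seconds.milliseconds since the Unix epoch,
	// then the record serial; 848 only orders records within this boot.
	ts := time.Unix(1765564255, 402*int64(time.Millisecond))
	fmt.Println(ts.UTC().Format("Jan 02 15:04:05.000"))
	// Prints "Dec 12 18:30:55.402", matching the journal timestamps around it.
}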
Dec 12 18:30:54.172000 audit[5513]: USER_ACCT pid=5513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:54.173915 sshd[5513]: Accepted publickey for core from 139.178.89.65 port 53698 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:30:54.175000 audit[5513]: CRED_ACQ pid=5513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:54.175000 audit[5513]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedddf8bc0 a2=3 a3=0 items=0 ppid=1 pid=5513 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:54.175000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:54.177560 sshd-session[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:30:54.185882 systemd-logind[1633]: New session 21 of user core. Dec 12 18:30:54.194779 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 18:30:54.199000 audit[5513]: USER_START pid=5513 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:54.202000 audit[5516]: CRED_ACQ pid=5516 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:55.418245 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 12 18:30:55.422992 kernel: audit: type=1325 audit(1765564255.402:848): table=filter:150 family=2 entries=26 op=nft_register_rule pid=5526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:30:55.423091 kernel: audit: type=1300 audit(1765564255.402:848): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe5373b780 a2=0 a3=7ffe5373b76c items=0 ppid=3085 pid=5526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:55.402000 audit[5526]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:30:55.402000 audit[5526]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe5373b780 a2=0 a3=7ffe5373b76c items=0 ppid=3085 pid=5526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:55.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:30:55.439474 kernel: audit: type=1327 audit(1765564255.402:848): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:30:55.445000 audit[5526]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=5526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:30:55.451230 kernel: audit: type=1325 audit(1765564255.445:849): table=nat:151 family=2 entries=20 op=nft_register_rule pid=5526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:30:55.445000 audit[5526]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe5373b780 a2=0 a3=0 items=0 ppid=3085 pid=5526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:55.459259 kernel: audit: type=1300 audit(1765564255.445:849): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe5373b780 a2=0 a3=0 items=0 ppid=3085 pid=5526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:55.445000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:30:55.464593 kernel: audit: type=1327 audit(1765564255.445:849): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:30:55.492000 audit[5528]: NETFILTER_CFG table=filter:152 family=2 entries=38 op=nft_register_rule pid=5528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:30:55.497190 kernel: audit: type=1325 audit(1765564255.492:850): table=filter:152 family=2 entries=38 op=nft_register_rule pid=5528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:30:55.492000 audit[5528]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc92b4cb90 a2=0 a3=7ffc92b4cb7c items=0 ppid=3085 pid=5528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:55.505229 kernel: audit: type=1300 audit(1765564255.492:850): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc92b4cb90 a2=0 a3=7ffc92b4cb7c items=0 ppid=3085 pid=5528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:55.505455 kernel: audit: type=1327 audit(1765564255.492:850): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:30:55.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:30:55.503000 audit[5528]: NETFILTER_CFG table=nat:153 family=2 entries=20 op=nft_register_rule pid=5528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:30:55.509757 kernel: audit: type=1325 audit(1765564255.503:851): table=nat:153 family=2 entries=20 op=nft_register_rule pid=5528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:30:55.503000 audit[5528]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc92b4cb90 a2=0 a3=0 items=0 ppid=3085 pid=5528 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:55.503000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:30:55.631214 sshd[5516]: Connection closed by 139.178.89.65 port 53698 Dec 12 18:30:55.632341 sshd-session[5513]: pam_unix(sshd:session): session closed for user core Dec 12 18:30:55.641000 audit[5513]: USER_END pid=5513 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:55.642000 audit[5513]: CRED_DISP pid=5513 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:55.650155 systemd[1]: sshd@18-10.230.23.66:22-139.178.89.65:53698.service: Deactivated successfully. Dec 12 18:30:55.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.23.66:22-139.178.89.65:53698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:55.656648 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:30:55.659415 systemd-logind[1633]: Session 21 logged out. Waiting for processes to exit. Dec 12 18:30:55.662574 systemd-logind[1633]: Removed session 21. Dec 12 18:30:55.790781 systemd[1]: Started sshd@19-10.230.23.66:22-139.178.89.65:53702.service - OpenSSH per-connection server daemon (139.178.89.65:53702). Dec 12 18:30:55.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.23.66:22-139.178.89.65:53702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:30:56.499964 kubelet[2966]: E1212 18:30:56.499065 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" podUID="d20e72ff-82d7-435d-b1ff-5952de8d6823" Dec 12 18:30:56.501530 kubelet[2966]: E1212 18:30:56.501330 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d77754785-tzmkt" podUID="f480f83d-d2ca-423c-8dd5-8e1df9a9ca33" Dec 12 18:30:56.605000 audit[5533]: USER_ACCT pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:56.607303 sshd[5533]: Accepted publickey for core from 139.178.89.65 port 53702 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:30:56.607000 audit[5533]: CRED_ACQ pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:56.607000 audit[5533]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc04b8a980 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:56.607000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:56.609612 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:30:56.621861 systemd-logind[1633]: New session 22 of user core. Dec 12 18:30:56.631447 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 12 18:30:56.637000 audit[5533]: USER_START pid=5533 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:56.640000 audit[5536]: CRED_ACQ pid=5536 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:57.422980 sshd[5536]: Connection closed by 139.178.89.65 port 53702 Dec 12 18:30:57.424007 sshd-session[5533]: pam_unix(sshd:session): session closed for user core Dec 12 18:30:57.425000 audit[5533]: USER_END pid=5533 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:57.425000 audit[5533]: CRED_DISP pid=5533 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:57.431332 systemd[1]: sshd@19-10.230.23.66:22-139.178.89.65:53702.service: Deactivated successfully. Dec 12 18:30:57.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.23.66:22-139.178.89.65:53702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:57.436795 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:30:57.439025 systemd-logind[1633]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:30:57.441516 systemd-logind[1633]: Removed session 22. Dec 12 18:30:57.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.23.66:22-139.178.89.65:53712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:57.585724 systemd[1]: Started sshd@20-10.230.23.66:22-139.178.89.65:53712.service - OpenSSH per-connection server daemon (139.178.89.65:53712). 
Dec 12 18:30:58.395000 audit[5546]: USER_ACCT pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:58.397354 sshd[5546]: Accepted publickey for core from 139.178.89.65 port 53712 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:30:58.398000 audit[5546]: CRED_ACQ pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:58.398000 audit[5546]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1ebbf5e0 a2=3 a3=0 items=0 ppid=1 pid=5546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:30:58.398000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:30:58.400330 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:30:58.410445 systemd-logind[1633]: New session 23 of user core. Dec 12 18:30:58.416424 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 18:30:58.421000 audit[5546]: USER_START pid=5546 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:58.425000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:58.946846 sshd[5549]: Connection closed by 139.178.89.65 port 53712 Dec 12 18:30:58.948284 sshd-session[5546]: pam_unix(sshd:session): session closed for user core Dec 12 18:30:58.950000 audit[5546]: USER_END pid=5546 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:58.950000 audit[5546]: CRED_DISP pid=5546 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:30:58.956012 systemd[1]: sshd@20-10.230.23.66:22-139.178.89.65:53712.service: Deactivated successfully. Dec 12 18:30:58.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.23.66:22-139.178.89.65:53712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:30:58.958745 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:30:58.960095 systemd-logind[1633]: Session 23 logged out. Waiting for processes to exit. 
Dec 12 18:30:58.962896 systemd-logind[1633]: Removed session 23. Dec 12 18:31:01.497417 kubelet[2966]: E1212 18:31:01.497289 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" podUID="dcf3ef86-77cd-46ff-befa-f79857cd4570" Dec 12 18:31:02.502235 kubelet[2966]: E1212 18:31:02.502101 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" podUID="1d148975-a25d-444b-b4af-f56c99e82a44" Dec 12 18:31:03.498456 kubelet[2966]: E1212 18:31:03.498150 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:31:03.513395 containerd[1664]: time="2025-12-12T18:31:03.497472382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:31:03.858951 containerd[1664]: time="2025-12-12T18:31:03.858645114Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:31:03.862886 containerd[1664]: time="2025-12-12T18:31:03.862618686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:31:03.862886 containerd[1664]: time="2025-12-12T18:31:03.862655381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:31:03.863326 kubelet[2966]: E1212 18:31:03.863230 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:31:03.864723 kubelet[2966]: E1212 18:31:03.863372 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:31:03.864723 kubelet[2966]: E1212 18:31:03.863895 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw84z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cwqst_calico-system(1566f111-6981-4bf6-b05d-69ebc0c0ffaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:31:03.865238 kubelet[2966]: E1212 18:31:03.865192 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cwqst" podUID="1566f111-6981-4bf6-b05d-69ebc0c0ffaa" Dec 12 18:31:04.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.23.66:22-139.178.89.65:33304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:31:04.113275 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 12 18:31:04.113365 kernel: audit: type=1130 audit(1765564264.110:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.23.66:22-139.178.89.65:33304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:31:04.110763 systemd[1]: Started sshd@21-10.230.23.66:22-139.178.89.65:33304.service - OpenSSH per-connection server daemon (139.178.89.65:33304). Dec 12 18:31:04.933000 audit[5567]: USER_ACCT pid=5567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:04.935302 sshd[5567]: Accepted publickey for core from 139.178.89.65 port 33304 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:31:04.940016 sshd-session[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:31:04.937000 audit[5567]: CRED_ACQ pid=5567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:04.948632 kernel: audit: type=1101 audit(1765564264.933:874): pid=5567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:04.948736 kernel: audit: type=1103 audit(1765564264.937:875): pid=5567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:04.953214 kernel: audit: type=1006 audit(1765564264.937:876): pid=5567 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 12 18:31:04.937000 audit[5567]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda11b5550 a2=3 a3=0 items=0 ppid=1 pid=5567 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:31:04.964212 kernel: audit: type=1300 audit(1765564264.937:876): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda11b5550 a2=3 a3=0 items=0 ppid=1 pid=5567 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:31:04.937000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:31:04.967215 kernel: audit: type=1327 audit(1765564264.937:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:31:04.968464 systemd-logind[1633]: New session 24 of user core. Dec 12 18:31:04.976470 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 18:31:04.981000 audit[5567]: USER_START pid=5567 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:04.989204 kernel: audit: type=1105 audit(1765564264.981:877): pid=5567 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:04.989000 audit[5570]: CRED_ACQ pid=5570 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:04.996206 kernel: audit: type=1103 audit(1765564264.989:878): pid=5570 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:05.233000 audit[5573]: NETFILTER_CFG table=filter:154 family=2 entries=26 op=nft_register_rule pid=5573 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:31:05.241205 kernel: audit: type=1325 audit(1765564265.233:879): table=filter:154 family=2 entries=26 op=nft_register_rule pid=5573 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:31:05.233000 audit[5573]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffcc89b380 a2=0 a3=7fffcc89b36c items=0 ppid=3085 pid=5573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:31:05.249475 kernel: audit: type=1300 audit(1765564265.233:879): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffcc89b380 a2=0 a3=7fffcc89b36c items=0 ppid=3085 pid=5573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:31:05.233000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:31:05.250000 audit[5573]: NETFILTER_CFG table=nat:155 family=2 entries=104 op=nft_register_chain pid=5573 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:31:05.250000 audit[5573]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffcc89b380 a2=0 a3=7fffcc89b36c items=0 ppid=3085 pid=5573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:31:05.250000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:31:05.503650 sshd[5570]: Connection closed by 139.178.89.65 port 33304 Dec 12 18:31:05.506619 sshd-session[5567]: pam_unix(sshd:session): session closed for user core Dec 12 18:31:05.509000 audit[5567]: USER_END pid=5567 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:05.510000 audit[5567]: CRED_DISP pid=5567 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:05.516788 systemd-logind[1633]: Session 24 logged out. Waiting for processes to exit. Dec 12 18:31:05.517539 systemd[1]: sshd@21-10.230.23.66:22-139.178.89.65:33304.service: Deactivated successfully. Dec 12 18:31:05.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.23.66:22-139.178.89.65:33304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:31:05.522677 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 18:31:05.527197 systemd-logind[1633]: Removed session 24. Dec 12 18:31:09.497597 containerd[1664]: time="2025-12-12T18:31:09.497302041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:31:09.813902 containerd[1664]: time="2025-12-12T18:31:09.813685602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:31:09.815500 containerd[1664]: time="2025-12-12T18:31:09.815416050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:31:09.815500 containerd[1664]: time="2025-12-12T18:31:09.815463287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:31:09.815954 kubelet[2966]: E1212 18:31:09.815850 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:31:09.817404 kubelet[2966]: E1212 18:31:09.815986 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:31:09.817404 kubelet[2966]: E1212 18:31:09.816304 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqr7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-557c4cb5b5-4c5jw_calico-apiserver(d20e72ff-82d7-435d-b1ff-5952de8d6823): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:31:09.817757 kubelet[2966]: E1212 18:31:09.817559 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" podUID="d20e72ff-82d7-435d-b1ff-5952de8d6823" Dec 12 18:31:10.680060 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 18:31:10.680265 kernel: audit: type=1130 audit(1765564270.662:884): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.23.66:22-139.178.89.65:36270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:31:10.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.23.66:22-139.178.89.65:36270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:31:10.666842 systemd[1]: Started sshd@22-10.230.23.66:22-139.178.89.65:36270.service - OpenSSH per-connection server daemon (139.178.89.65:36270). 
Dec 12 18:31:11.474000 audit[5584]: USER_ACCT pid=5584 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:11.485946 kernel: audit: type=1101 audit(1765564271.474:885): pid=5584 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:11.486535 sshd[5584]: Accepted publickey for core from 139.178.89.65 port 36270 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:31:11.495183 kernel: audit: type=1103 audit(1765564271.488:886): pid=5584 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:11.488000 audit[5584]: CRED_ACQ pid=5584 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:11.490665 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:31:11.504890 kernel: audit: type=1006 audit(1765564271.488:887): pid=5584 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 12 18:31:11.488000 audit[5584]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd67f0a0d0 a2=3 a3=0 items=0 ppid=1 pid=5584 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:31:11.510542 containerd[1664]: time="2025-12-12T18:31:11.508285696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:31:11.513281 kernel: audit: type=1300 audit(1765564271.488:887): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd67f0a0d0 a2=3 a3=0 items=0 ppid=1 pid=5584 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:31:11.488000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:31:11.521982 kernel: audit: type=1327 audit(1765564271.488:887): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:31:11.520035 systemd-logind[1633]: New session 25 of user core. Dec 12 18:31:11.525469 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 12 18:31:11.536000 audit[5584]: USER_START pid=5584 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:11.544957 kernel: audit: type=1105 audit(1765564271.536:888): pid=5584 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:11.545916 kernel: audit: type=1103 audit(1765564271.543:889): pid=5613 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:11.543000 audit[5613]: CRED_ACQ pid=5613 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:11.835665 containerd[1664]: time="2025-12-12T18:31:11.835422607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:31:11.836605 containerd[1664]: time="2025-12-12T18:31:11.836549647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:31:11.836701 containerd[1664]: time="2025-12-12T18:31:11.836672075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:31:11.837438 kubelet[2966]: E1212 18:31:11.837376 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:31:11.840580 kubelet[2966]: E1212 18:31:11.837460 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:31:11.840580 kubelet[2966]: E1212 18:31:11.837652 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:54e2ece65d4e4e24bb0d14e858b9af4a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kv6kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d77754785-tzmkt_calico-system(f480f83d-d2ca-423c-8dd5-8e1df9a9ca33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:31:11.845720 containerd[1664]: time="2025-12-12T18:31:11.845650402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:31:12.071417 sshd[5613]: Connection closed by 139.178.89.65 port 36270 Dec 12 18:31:12.073752 sshd-session[5584]: pam_unix(sshd:session): session closed for user core Dec 12 18:31:12.080000 audit[5584]: USER_END pid=5584 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:12.092523 kernel: audit: type=1106 audit(1765564272.080:890): pid=5584 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:12.092674 systemd[1]: sshd@22-10.230.23.66:22-139.178.89.65:36270.service: Deactivated successfully. Dec 12 18:31:12.098895 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 18:31:12.103263 systemd-logind[1633]: Session 25 logged out. Waiting for processes to exit. 
Dec 12 18:31:12.080000 audit[5584]: CRED_DISP pid=5584 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:12.110210 kernel: audit: type=1104 audit(1765564272.080:891): pid=5584 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:12.112253 systemd-logind[1633]: Removed session 25. Dec 12 18:31:12.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.23.66:22-139.178.89.65:36270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:31:12.166876 containerd[1664]: time="2025-12-12T18:31:12.166761322Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:31:12.169691 containerd[1664]: time="2025-12-12T18:31:12.169558258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:31:12.169885 containerd[1664]: time="2025-12-12T18:31:12.169567238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:31:12.170739 kubelet[2966]: E1212 18:31:12.170524 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:31:12.172197 kubelet[2966]: E1212 18:31:12.170874 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:31:12.173284 kubelet[2966]: E1212 18:31:12.173183 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kv6kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d77754785-tzmkt_calico-system(f480f83d-d2ca-423c-8dd5-8e1df9a9ca33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:31:12.174560 kubelet[2966]: E1212 18:31:12.174473 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d77754785-tzmkt" podUID="f480f83d-d2ca-423c-8dd5-8e1df9a9ca33" Dec 12 18:31:14.501740 containerd[1664]: time="2025-12-12T18:31:14.501686242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:31:14.503712 kubelet[2966]: E1212 18:31:14.503091 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-cwqst" podUID="1566f111-6981-4bf6-b05d-69ebc0c0ffaa" Dec 12 18:31:14.827755 containerd[1664]: time="2025-12-12T18:31:14.827405822Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:31:14.829290 containerd[1664]: time="2025-12-12T18:31:14.829144734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:31:14.829290 containerd[1664]: time="2025-12-12T18:31:14.829244453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:31:14.830026 kubelet[2966]: E1212 18:31:14.829968 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:31:14.830258 kubelet[2966]: E1212 18:31:14.830218 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:31:14.830924 kubelet[2966]: E1212 18:31:14.830838 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9hcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzv84_calico-system(30dcceea-b67a-4ecb-b6c6-16baeb5ae67c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:31:14.834040 containerd[1664]: time="2025-12-12T18:31:14.833954652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:31:15.137681 containerd[1664]: time="2025-12-12T18:31:15.137499541Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:31:15.140338 containerd[1664]: time="2025-12-12T18:31:15.140234417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:31:15.140338 containerd[1664]: time="2025-12-12T18:31:15.140293369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:31:15.141129 kubelet[2966]: E1212 18:31:15.141076 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:31:15.141256 kubelet[2966]: E1212 18:31:15.141149 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:31:15.141451 kubelet[2966]: E1212 18:31:15.141357 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9hcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzv84_calico-system(30dcceea-b67a-4ecb-b6c6-16baeb5ae67c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:31:15.142904 kubelet[2966]: E1212 18:31:15.142857 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzv84" podUID="30dcceea-b67a-4ecb-b6c6-16baeb5ae67c" Dec 12 18:31:15.498189 containerd[1664]: time="2025-12-12T18:31:15.497876775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:31:15.808490 containerd[1664]: time="2025-12-12T18:31:15.807913940Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:31:15.812236 containerd[1664]: time="2025-12-12T18:31:15.812188562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:31:15.812436 containerd[1664]: time="2025-12-12T18:31:15.812154031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:31:15.812980 kubelet[2966]: E1212 18:31:15.812921 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:31:15.814095 kubelet[2966]: E1212 18:31:15.813450 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:31:15.814095 kubelet[2966]: E1212 18:31:15.813826 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6szg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-557c4cb5b5-v5k7t_calico-apiserver(dcf3ef86-77cd-46ff-befa-f79857cd4570): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:31:15.815718 kubelet[2966]: E1212 18:31:15.815154 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-v5k7t" podUID="dcf3ef86-77cd-46ff-befa-f79857cd4570" Dec 12 18:31:15.815805 containerd[1664]: time="2025-12-12T18:31:15.814736348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:31:16.147835 containerd[1664]: time="2025-12-12T18:31:16.147523939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:31:16.150964 containerd[1664]: time="2025-12-12T18:31:16.150883148Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:31:16.151278 containerd[1664]: time="2025-12-12T18:31:16.151035742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:31:16.151481 kubelet[2966]: E1212 18:31:16.151400 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:31:16.151582 kubelet[2966]: E1212 18:31:16.151559 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:31:16.152275 kubelet[2966]: E1212 18:31:16.151890 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pjtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-84669789bf-xx9l4_calico-system(1d148975-a25d-444b-b4af-f56c99e82a44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:31:16.153314 kubelet[2966]: E1212 18:31:16.153114 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84669789bf-xx9l4" podUID="1d148975-a25d-444b-b4af-f56c99e82a44" Dec 12 18:31:17.256071 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:31:17.256411 kernel: audit: type=1130 audit(1765564277.242:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.23.66:22-139.178.89.65:36276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:31:17.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.23.66:22-139.178.89.65:36276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:31:17.243431 systemd[1]: Started sshd@23-10.230.23.66:22-139.178.89.65:36276.service - OpenSSH per-connection server daemon (139.178.89.65:36276). 
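An aside on the repeated 404s in the stretch above: every PullImage failure here is ghcr.io refusing to resolve a Calico v3.30.4 tag (csi, node-driver-registrar, apiserver, kube-controllers). A minimal sketch, not part of this journal, of how one might confirm from outside the node whether such a tag resolves, assuming ghcr.io's standard anonymous OCI distribution token flow for public images; the repository and tag are copied from the failing pulls, everything else (file name, structure) is illustrative:

    # check_tag.py - hypothetical helper, not a tool referenced in this log
    import json
    import urllib.error
    import urllib.request

    REGISTRY = "ghcr.io"
    REPO = "flatcar/calico/csi"   # repository from the failing PullImage call
    TAG = "v3.30.4"               # tag containerd reported as "not found"

    def tag_exists(repo: str, tag: str) -> bool:
        # 1) anonymous pull token for the repository (standard OCI/ghcr.io flow)
        token_url = (f"https://{REGISTRY}/token"
                     f"?service={REGISTRY}&scope=repository:{repo}:pull")
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]

        # 2) HEAD the manifest: 200 means the tag resolves, 404 matches the
        #    "failed to resolve image ... not found" errors recorded above
        req = urllib.request.Request(
            f"https://{REGISTRY}/v2/{repo}/manifests/{tag}",
            method="HEAD",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": ("application/vnd.oci.image.index.v1+json, "
                           "application/vnd.docker.distribution.manifest.list.v2+json"),
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        print(f"{REPO}:{TAG} exists:", tag_exists(REPO, TAG))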
Dec 12 18:31:18.110000 audit[5647]: USER_ACCT pid=5647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:18.117523 sshd[5647]: Accepted publickey for core from 139.178.89.65 port 36276 ssh2: RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU Dec 12 18:31:18.121198 kernel: audit: type=1101 audit(1765564278.110:894): pid=5647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:18.120000 audit[5647]: CRED_ACQ pid=5647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:18.125705 sshd-session[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:31:18.127186 kernel: audit: type=1103 audit(1765564278.120:895): pid=5647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:18.131190 kernel: audit: type=1006 audit(1765564278.120:896): pid=5647 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 12 18:31:18.120000 audit[5647]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6b012170 a2=3 a3=0 items=0 ppid=1 pid=5647 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:31:18.137188 kernel: audit: type=1300 audit(1765564278.120:896): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6b012170 a2=3 a3=0 items=0 ppid=1 pid=5647 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:31:18.137416 kernel: audit: type=1327 audit(1765564278.120:896): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:31:18.120000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:31:18.146868 systemd-logind[1633]: New session 26 of user core. Dec 12 18:31:18.155462 systemd[1]: Started session-26.scope - Session 26 of User core. 
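The "Accepted publickey for core ... RSA SHA256:jo3Cp94RWwUYPMISUA0rnCA96kDhp7AbC5KdtynibHU" entry above identifies the client key by OpenSSH's SHA256 fingerprint, i.e. unpadded base64 of the SHA-256 digest of the raw public key blob. A minimal sketch of reproducing that format in Python; the authorized_keys path is a hypothetical example and is not recorded anywhere in this journal:

    # fingerprint.py - illustrative only
    import base64
    import hashlib

    def openssh_fingerprint(pubkey_line: str) -> str:
        # an OpenSSH public key line looks like: "ssh-rsa AAAAB3... comment"
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("/home/core/.ssh/authorized_keys") as fh:   # hypothetical path
        print(openssh_fingerprint(fh.readline()))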
Dec 12 18:31:18.162000 audit[5647]: USER_START pid=5647 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:18.170950 kernel: audit: type=1105 audit(1765564278.162:897): pid=5647 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:18.174000 audit[5650]: CRED_ACQ pid=5650 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:18.182193 kernel: audit: type=1103 audit(1765564278.174:898): pid=5650 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:18.803207 sshd[5650]: Connection closed by 139.178.89.65 port 36276 Dec 12 18:31:18.804697 sshd-session[5647]: pam_unix(sshd:session): session closed for user core Dec 12 18:31:18.806000 audit[5647]: USER_END pid=5647 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:18.818364 kernel: audit: type=1106 audit(1765564278.806:899): pid=5647 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:18.821136 systemd[1]: sshd@23-10.230.23.66:22-139.178.89.65:36276.service: Deactivated successfully. Dec 12 18:31:18.811000 audit[5647]: CRED_DISP pid=5647 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:18.832191 kernel: audit: type=1104 audit(1765564278.811:900): pid=5647 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:31:18.833590 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 18:31:18.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.23.66:22-139.178.89.65:36276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:31:18.839326 systemd-logind[1633]: Session 26 logged out. Waiting for processes to exit. Dec 12 18:31:18.852563 systemd-logind[1633]: Removed session 26. 
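The audit records interleaved with the sshd messages above (USER_ACCT, CRED_ACQ, USER_START, USER_END, CRED_DISP, SERVICE_START/STOP) are flat key=value lines, with a nested key=value payload inside msg='...'. A minimal sketch of splitting such a record into fields; the sample is abridged from the USER_END record above, and the parser itself is illustrative rather than a tool used on this host:

    # parse_audit.py - illustrative only
    import re

    SAMPLE = ("audit[5647]: USER_END pid=5647 uid=0 auid=500 ses=26 "
              "subj=system_u:system_r:kernel_t:s0 "
              "msg='op=PAM:session_close acct=\"core\" hostname=139.178.89.65 "
              "addr=139.178.89.65 terminal=ssh res=success'")

    def parse_fields(line: str) -> dict:
        # key=value pairs; values may be bare, 'single quoted' or "double quoted"
        pairs = re.findall(r"(\w+)=('[^']*'|\"[^\"]*\"|\S+)", line)
        return {key: value.strip("'\"") for key, value in pairs}

    record = parse_fields(SAMPLE)
    inner = parse_fields(record["msg"])       # msg='...' nests further pairs
    print(record["auid"], record["ses"], inner["acct"], inner["res"])  # 500 26 core success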
Dec 12 18:31:22.498037 kubelet[2966]: E1212 18:31:22.497962 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557c4cb5b5-4c5jw" podUID="d20e72ff-82d7-435d-b1ff-5952de8d6823"
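With the pull itself still failing, kubelet has moved this pod from ErrImagePull to ImagePullBackOff, i.e. it is now rate-limiting its retries instead of re-pulling on every pod sync. A minimal sketch of the resulting delay progression, assuming kubelet's default image-pull back-off of a 10-second initial delay that doubles per failed attempt up to a 5-minute cap; those parameters are an assumption about upstream defaults, not values recorded in this journal:

    # backoff.py - illustrative only
    def backoff_delays(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
        # yield the delay kubelet would wait before each successive retry
        delay = initial
        for _ in range(attempts):
            yield min(delay, cap)
            delay *= 2

    print(list(backoff_delays()))
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]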