Dec 16 15:25:36.336954 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025 Dec 16 15:25:36.337001 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 15:25:36.337016 kernel: BIOS-provided physical RAM map: Dec 16 15:25:36.337028 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 16 15:25:36.337044 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 16 15:25:36.337056 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 16 15:25:36.337069 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Dec 16 15:25:36.337087 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Dec 16 15:25:36.337099 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 16 15:25:36.337111 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 16 15:25:36.337123 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 15:25:36.337134 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 16 15:25:36.337164 kernel: NX (Execute Disable) protection: active Dec 16 15:25:36.337176 kernel: APIC: Static calls initialized Dec 16 15:25:36.337190 kernel: SMBIOS 2.8 present. Dec 16 15:25:36.337203 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Dec 16 15:25:36.337216 kernel: DMI: Memory slots populated: 1/1 Dec 16 15:25:36.337233 kernel: Hypervisor detected: KVM Dec 16 15:25:36.337245 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 16 15:25:36.337258 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 15:25:36.337270 kernel: kvm-clock: using sched offset of 5094283685 cycles Dec 16 15:25:36.338581 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 15:25:36.338597 kernel: tsc: Detected 2500.032 MHz processor Dec 16 15:25:36.338611 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 15:25:36.338625 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 15:25:36.338646 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 16 15:25:36.338659 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 16 15:25:36.338673 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 15:25:36.338686 kernel: Using GB pages for direct mapping Dec 16 15:25:36.338699 kernel: ACPI: Early table checksum verification disabled Dec 16 15:25:36.338713 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Dec 16 15:25:36.338726 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 15:25:36.338740 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 15:25:36.338757 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 15:25:36.338770 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Dec 16 15:25:36.338783 kernel: ACPI: APIC 
0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 15:25:36.338797 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 15:25:36.338810 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 15:25:36.338833 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 15:25:36.338853 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Dec 16 15:25:36.338871 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Dec 16 15:25:36.338885 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Dec 16 15:25:36.338899 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Dec 16 15:25:36.338913 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Dec 16 15:25:36.338930 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Dec 16 15:25:36.338944 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Dec 16 15:25:36.338957 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 16 15:25:36.338971 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 16 15:25:36.338985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Dec 16 15:25:36.338998 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Dec 16 15:25:36.339012 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Dec 16 15:25:36.339039 kernel: Zone ranges: Dec 16 15:25:36.339053 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 15:25:36.339066 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Dec 16 15:25:36.339080 kernel: Normal empty Dec 16 15:25:36.339093 kernel: Device empty Dec 16 15:25:36.339107 kernel: Movable zone start for each node Dec 16 15:25:36.339121 kernel: Early memory node ranges Dec 16 15:25:36.339134 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 16 15:25:36.339168 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Dec 16 15:25:36.339182 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Dec 16 15:25:36.339196 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 15:25:36.339209 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 16 15:25:36.339223 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Dec 16 15:25:36.339236 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 16 15:25:36.339254 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 15:25:36.339277 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 15:25:36.339292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 16 15:25:36.339305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 15:25:36.339319 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 15:25:36.339333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 15:25:36.339346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 15:25:36.339360 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 15:25:36.339383 kernel: TSC deadline timer available Dec 16 15:25:36.339397 kernel: CPU topo: Max. logical packages: 16 Dec 16 15:25:36.339411 kernel: CPU topo: Max. logical dies: 16 Dec 16 15:25:36.339424 kernel: CPU topo: Max. dies per package: 1 Dec 16 15:25:36.339438 kernel: CPU topo: Max. 
threads per core: 1 Dec 16 15:25:36.339452 kernel: CPU topo: Num. cores per package: 1 Dec 16 15:25:36.339465 kernel: CPU topo: Num. threads per package: 1 Dec 16 15:25:36.339479 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Dec 16 15:25:36.339501 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 15:25:36.339530 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 16 15:25:36.339559 kernel: Booting paravirtualized kernel on KVM Dec 16 15:25:36.339573 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 15:25:36.339587 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Dec 16 15:25:36.339601 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Dec 16 15:25:36.339614 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Dec 16 15:25:36.339641 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Dec 16 15:25:36.339655 kernel: kvm-guest: PV spinlocks enabled Dec 16 15:25:36.339669 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 15:25:36.339684 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 15:25:36.339698 kernel: random: crng init done Dec 16 15:25:36.339712 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 15:25:36.339725 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 15:25:36.339750 kernel: Fallback order for Node 0: 0 Dec 16 15:25:36.339764 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Dec 16 15:25:36.339777 kernel: Policy zone: DMA32 Dec 16 15:25:36.339791 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 15:25:36.339805 kernel: software IO TLB: area num 16. Dec 16 15:25:36.339818 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Dec 16 15:25:36.339832 kernel: Kernel/User page tables isolation: enabled Dec 16 15:25:36.339856 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 15:25:36.339870 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 15:25:36.339884 kernel: Dynamic Preempt: voluntary Dec 16 15:25:36.339897 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 15:25:36.339912 kernel: rcu: RCU event tracing is enabled. Dec 16 15:25:36.339926 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Dec 16 15:25:36.339940 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 15:25:36.339964 kernel: Rude variant of Tasks RCU enabled. Dec 16 15:25:36.339978 kernel: Tracing variant of Tasks RCU enabled. Dec 16 15:25:36.339992 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 15:25:36.340006 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Dec 16 15:25:36.340019 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 16 15:25:36.340033 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Dec 16 15:25:36.340047 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 16 15:25:36.340070 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Dec 16 15:25:36.340085 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 15:25:36.340120 kernel: Console: colour VGA+ 80x25 Dec 16 15:25:36.340151 kernel: printk: legacy console [tty0] enabled Dec 16 15:25:36.340166 kernel: printk: legacy console [ttyS0] enabled Dec 16 15:25:36.340185 kernel: ACPI: Core revision 20240827 Dec 16 15:25:36.340199 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 15:25:36.340214 kernel: x2apic enabled Dec 16 15:25:36.340228 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 15:25:36.340243 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Dec 16 15:25:36.340268 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032) Dec 16 15:25:36.340283 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 15:25:36.340297 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 16 15:25:36.340311 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 16 15:25:36.340334 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 15:25:36.340348 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 15:25:36.340362 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 15:25:36.340376 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 16 15:25:36.340389 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 15:25:36.340403 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 15:25:36.340417 kernel: MDS: Mitigation: Clear CPU buffers Dec 16 15:25:36.340430 kernel: MMIO Stale Data: Unknown: No mitigations Dec 16 15:25:36.340444 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 16 15:25:36.340458 kernel: active return thunk: its_return_thunk Dec 16 15:25:36.340471 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 15:25:36.340496 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 15:25:36.340548 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 15:25:36.340568 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 15:25:36.340582 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 15:25:36.340596 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 16 15:25:36.340610 kernel: Freeing SMP alternatives memory: 32K Dec 16 15:25:36.340623 kernel: pid_max: default: 32768 minimum: 301 Dec 16 15:25:36.340637 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 15:25:36.340651 kernel: landlock: Up and running. Dec 16 15:25:36.340678 kernel: SELinux: Initializing. Dec 16 15:25:36.340693 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 15:25:36.340707 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 15:25:36.340721 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Dec 16 15:25:36.340735 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Dec 16 15:25:36.340749 kernel: signal: max sigframe size: 1776 Dec 16 15:25:36.340763 kernel: rcu: Hierarchical SRCU implementation. Dec 16 15:25:36.340778 kernel: rcu: Max phase no-delay instances is 400. Dec 16 15:25:36.340792 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Dec 16 15:25:36.340817 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 15:25:36.340833 kernel: smp: Bringing up secondary CPUs ... Dec 16 15:25:36.340847 kernel: smpboot: x86: Booting SMP configuration: Dec 16 15:25:36.340861 kernel: .... node #0, CPUs: #1 Dec 16 15:25:36.340875 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 15:25:36.340889 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS) Dec 16 15:25:36.340904 kernel: Memory: 1914108K/2096616K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 176492K reserved, 0K cma-reserved) Dec 16 15:25:36.340929 kernel: devtmpfs: initialized Dec 16 15:25:36.340944 kernel: x86/mm: Memory block size: 128MB Dec 16 15:25:36.340958 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 15:25:36.340972 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Dec 16 15:25:36.340986 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 15:25:36.341001 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 15:25:36.341015 kernel: audit: initializing netlink subsys (disabled) Dec 16 15:25:36.341039 kernel: audit: type=2000 audit(1765898732.111:1): state=initialized audit_enabled=0 res=1 Dec 16 15:25:36.341054 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 15:25:36.341068 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 15:25:36.341082 kernel: cpuidle: using governor menu Dec 16 15:25:36.341097 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 15:25:36.341111 kernel: dca service started, version 1.12.1 Dec 16 15:25:36.341130 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Dec 16 15:25:36.341163 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 16 15:25:36.341178 kernel: PCI: Using configuration type 1 for base access Dec 16 15:25:36.341193 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 15:25:36.341207 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 15:25:36.341221 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 15:25:36.341235 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 15:25:36.341250 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 15:25:36.341274 kernel: ACPI: Added _OSI(Module Device) Dec 16 15:25:36.341289 kernel: ACPI: Added _OSI(Processor Device) Dec 16 15:25:36.341303 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 15:25:36.341317 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 15:25:36.341331 kernel: ACPI: Interpreter enabled Dec 16 15:25:36.341345 kernel: ACPI: PM: (supports S0 S5) Dec 16 15:25:36.341359 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 15:25:36.341384 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 15:25:36.341399 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 15:25:36.341413 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 16 15:25:36.341427 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 15:25:36.343498 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 15:25:36.344705 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 16 15:25:36.344960 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 16 15:25:36.344984 kernel: PCI host bridge to bus 0000:00 Dec 16 15:25:36.345239 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 15:25:36.345450 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 15:25:36.347726 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 15:25:36.347941 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 16 15:25:36.348217 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 16 15:25:36.348424 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Dec 16 15:25:36.348648 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 15:25:36.348913 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 16 15:25:36.349182 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Dec 16 15:25:36.349427 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Dec 16 15:25:36.353106 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Dec 16 15:25:36.353378 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Dec 16 15:25:36.353624 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 15:25:36.353863 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 15:25:36.354085 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Dec 16 15:25:36.354363 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 15:25:36.354618 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 16 15:25:36.354840 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 15:25:36.355073 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 15:25:36.355435 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Dec 16 15:25:36.357818 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 
15:25:36.358100 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 16 15:25:36.358341 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 15:25:36.366402 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 15:25:36.366698 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Dec 16 15:25:36.366927 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 15:25:36.367166 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 16 15:25:36.367412 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 15:25:36.367686 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 15:25:36.367926 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Dec 16 15:25:36.368160 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 15:25:36.373178 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 16 15:25:36.373424 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 15:25:36.373734 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 15:25:36.373972 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Dec 16 15:25:36.374209 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 15:25:36.374430 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 16 15:25:36.374722 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 15:25:36.374958 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 15:25:36.375215 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Dec 16 15:25:36.375435 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 15:25:36.375696 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 15:25:36.375919 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 15:25:36.376219 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 15:25:36.376460 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Dec 16 15:25:36.376706 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 15:25:36.376926 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 15:25:36.377180 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 15:25:36.377414 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 15:25:36.377703 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Dec 16 15:25:36.377924 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 15:25:36.378154 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 15:25:36.378377 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 15:25:36.378679 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 15:25:36.378906 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Dec 16 15:25:36.379160 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Dec 16 15:25:36.379382 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Dec 16 15:25:36.379638 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Dec 16 15:25:36.379873 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 16 15:25:36.380092 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Dec 16 15:25:36.380328 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Dec 16 15:25:36.380600 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Dec 16 15:25:36.380836 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 16 15:25:36.381057 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 16 15:25:36.381316 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 16 15:25:36.381554 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Dec 16 15:25:36.381777 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Dec 16 15:25:36.382033 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 16 15:25:36.382267 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Dec 16 15:25:36.382505 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Dec 16 15:25:36.382763 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Dec 16 15:25:36.382988 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 15:25:36.383245 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 15:25:36.383469 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 15:25:36.383733 kernel: pci_bus 0000:02: extended config space not accessible Dec 16 15:25:36.383990 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Dec 16 15:25:36.384237 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Dec 16 15:25:36.384461 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 15:25:36.384760 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Dec 16 15:25:36.384985 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Dec 16 15:25:36.385221 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 15:25:36.385460 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Dec 16 15:25:36.385707 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Dec 16 15:25:36.385948 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 15:25:36.386199 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 15:25:36.386421 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 15:25:36.386673 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 15:25:36.386893 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 15:25:36.387111 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 15:25:36.387158 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 15:25:36.387174 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 15:25:36.387188 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 15:25:36.387203 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 15:25:36.387218 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 16 15:25:36.387239 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 16 15:25:36.387254 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 16 15:25:36.387279 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 16 15:25:36.387294 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 16 15:25:36.387309 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 16 15:25:36.387323 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 16 15:25:36.387337 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 16 15:25:36.387351 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 
16 15:25:36.387365 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 16 15:25:36.387389 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 16 15:25:36.387405 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 16 15:25:36.387419 kernel: iommu: Default domain type: Translated Dec 16 15:25:36.387434 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 15:25:36.387448 kernel: PCI: Using ACPI for IRQ routing Dec 16 15:25:36.387463 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 15:25:36.387477 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 16 15:25:36.387502 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Dec 16 15:25:36.387741 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 16 15:25:36.387959 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 16 15:25:36.388190 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 15:25:36.388211 kernel: vgaarb: loaded Dec 16 15:25:36.388226 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 15:25:36.388240 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 15:25:36.388271 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 15:25:36.388286 kernel: pnp: PnP ACPI init Dec 16 15:25:36.388551 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 16 15:25:36.388574 kernel: pnp: PnP ACPI: found 5 devices Dec 16 15:25:36.388589 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 15:25:36.388604 kernel: NET: Registered PF_INET protocol family Dec 16 15:25:36.388619 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 15:25:36.388649 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 16 15:25:36.388664 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 15:25:36.388679 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 15:25:36.388693 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 16 15:25:36.388708 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 16 15:25:36.388722 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 15:25:36.388747 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 15:25:36.388762 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 15:25:36.388777 kernel: NET: Registered PF_XDP protocol family Dec 16 15:25:36.388994 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Dec 16 15:25:36.389230 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 16 15:25:36.389450 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 16 15:25:36.389700 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 16 15:25:36.389937 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 16 15:25:36.390172 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 16 15:25:36.390393 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 16 15:25:36.390629 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 16 15:25:36.390849 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Dec 16 15:25:36.391072 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Dec 16 15:25:36.391304 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Dec 16 15:25:36.391570 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Dec 16 15:25:36.391792 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Dec 16 15:25:36.392034 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Dec 16 15:25:36.392267 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Dec 16 15:25:36.392485 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Dec 16 15:25:36.392726 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 15:25:36.393010 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 15:25:36.393243 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 15:25:36.393472 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 16 15:25:36.393725 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 16 15:25:36.393954 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 15:25:36.394200 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 15:25:36.394421 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 16 15:25:36.394679 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 16 15:25:36.394898 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 15:25:36.395116 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 15:25:36.395348 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 16 15:25:36.395598 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 16 15:25:36.395836 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 15:25:36.396056 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 15:25:36.396288 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 16 15:25:36.396506 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 16 15:25:36.396743 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 15:25:36.396981 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 15:25:36.397216 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 16 15:25:36.397436 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 16 15:25:36.397687 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 15:25:36.397908 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 15:25:36.398127 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 16 15:25:36.398374 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 15:25:36.398613 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 15:25:36.398834 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 15:25:36.399064 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Dec 16 15:25:36.399308 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 15:25:36.399543 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 15:25:36.399764 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 15:25:36.399982 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 16 15:25:36.400233 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 15:25:36.400453 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 15:25:36.400712 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 15:25:36.400926 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 15:25:36.401159 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 15:25:36.401363 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 16 15:25:36.401603 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 16 15:25:36.401806 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Dec 16 15:25:36.402039 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 16 15:25:36.402262 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Dec 16 15:25:36.402470 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 15:25:36.402724 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 16 15:25:36.402964 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Dec 16 15:25:36.403187 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 16 15:25:36.403395 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 15:25:36.403634 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Dec 16 15:25:36.403844 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 16 15:25:36.404070 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 15:25:36.404303 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 16 15:25:36.404524 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 16 15:25:36.404751 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 15:25:36.404977 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Dec 16 15:25:36.405198 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 16 15:25:36.405425 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 15:25:36.405666 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Dec 16 15:25:36.405875 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 16 15:25:36.406082 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 15:25:36.406315 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 16 15:25:36.406550 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 16 15:25:36.406785 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 15:25:36.407008 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 16 15:25:36.407234 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 16 15:25:36.407443 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 15:25:36.407466 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 15:25:36.407482 kernel: PCI: CLS 0 bytes, default 64 Dec 16 15:25:36.407528 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 15:25:36.407546 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 16 15:25:36.407561 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 15:25:36.407577 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Dec 16 15:25:36.407592 kernel: Initialise system trusted keyrings Dec 16 15:25:36.407608 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 15:25:36.407623 
kernel: Key type asymmetric registered Dec 16 15:25:36.407651 kernel: Asymmetric key parser 'x509' registered Dec 16 15:25:36.407667 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 15:25:36.407682 kernel: io scheduler mq-deadline registered Dec 16 15:25:36.407698 kernel: io scheduler kyber registered Dec 16 15:25:36.407713 kernel: io scheduler bfq registered Dec 16 15:25:36.407940 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 16 15:25:36.408175 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 16 15:25:36.408416 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 15:25:36.408670 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 16 15:25:36.408892 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 16 15:25:36.409112 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 15:25:36.409348 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 16 15:25:36.409608 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 16 15:25:36.409829 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 15:25:36.410052 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 16 15:25:36.410307 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 16 15:25:36.410557 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 15:25:36.410805 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 16 15:25:36.411028 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 16 15:25:36.411263 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 15:25:36.411485 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 16 15:25:36.411723 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 16 15:25:36.411961 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 15:25:36.412197 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 16 15:25:36.412418 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 16 15:25:36.412669 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 15:25:36.412891 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 16 15:25:36.413145 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 16 15:25:36.413370 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 15:25:36.413393 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 15:25:36.413409 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 15:25:36.413425 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 16 15:25:36.413441 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 15:25:36.413472 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 15:25:36.413488 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 
15:25:36.413503 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 15:25:36.413541 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 15:25:36.413787 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 15:25:36.413812 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 15:25:36.414042 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 15:25:36.414270 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T15:25:34 UTC (1765898734) Dec 16 15:25:36.414482 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 16 15:25:36.414504 kernel: intel_pstate: CPU model not supported Dec 16 15:25:36.414546 kernel: NET: Registered PF_INET6 protocol family Dec 16 15:25:36.414563 kernel: Segment Routing with IPv6 Dec 16 15:25:36.414578 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 15:25:36.414607 kernel: NET: Registered PF_PACKET protocol family Dec 16 15:25:36.414623 kernel: Key type dns_resolver registered Dec 16 15:25:36.414638 kernel: IPI shorthand broadcast: enabled Dec 16 15:25:36.414654 kernel: sched_clock: Marking stable (2118003544, 226133134)->(2595325358, -251188680) Dec 16 15:25:36.414669 kernel: registered taskstats version 1 Dec 16 15:25:36.414684 kernel: Loading compiled-in X.509 certificates Dec 16 15:25:36.414700 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4' Dec 16 15:25:36.414725 kernel: Demotion targets for Node 0: null Dec 16 15:25:36.414742 kernel: Key type .fscrypt registered Dec 16 15:25:36.414757 kernel: Key type fscrypt-provisioning registered Dec 16 15:25:36.414772 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 15:25:36.414787 kernel: ima: Allocated hash algorithm: sha1 Dec 16 15:25:36.414802 kernel: ima: No architecture policies found Dec 16 15:25:36.414817 kernel: clk: Disabling unused clocks Dec 16 15:25:36.414832 kernel: Freeing unused kernel image (initmem) memory: 15464K Dec 16 15:25:36.414859 kernel: Write protecting the kernel read-only data: 45056k Dec 16 15:25:36.414874 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Dec 16 15:25:36.414889 kernel: Run /init as init process Dec 16 15:25:36.414904 kernel: with arguments: Dec 16 15:25:36.414919 kernel: /init Dec 16 15:25:36.414934 kernel: with environment: Dec 16 15:25:36.414949 kernel: HOME=/ Dec 16 15:25:36.414974 kernel: TERM=linux Dec 16 15:25:36.414989 kernel: ACPI: bus type USB registered Dec 16 15:25:36.415005 kernel: usbcore: registered new interface driver usbfs Dec 16 15:25:36.415020 kernel: usbcore: registered new interface driver hub Dec 16 15:25:36.415035 kernel: usbcore: registered new device driver usb Dec 16 15:25:36.415284 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 16 15:25:36.415527 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 16 15:25:36.415772 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 16 15:25:36.415997 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 16 15:25:36.416237 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 16 15:25:36.416465 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 16 15:25:36.416766 kernel: hub 1-0:1.0: USB hub found Dec 16 15:25:36.417008 kernel: hub 1-0:1.0: 4 ports detected Dec 16 15:25:36.417307 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 16 15:25:36.417605 kernel: hub 2-0:1.0: USB hub found Dec 16 15:25:36.417850 kernel: hub 2-0:1.0: 4 ports detected Dec 16 15:25:36.417872 kernel: SCSI subsystem initialized Dec 16 15:25:36.417888 kernel: libata version 3.00 loaded. Dec 16 15:25:36.418132 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 15:25:36.418170 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 15:25:36.418389 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 15:25:36.418628 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 15:25:36.418849 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 15:25:36.419106 kernel: scsi host0: ahci Dec 16 15:25:36.419374 kernel: scsi host1: ahci Dec 16 15:25:36.419642 kernel: scsi host2: ahci Dec 16 15:25:36.419885 kernel: scsi host3: ahci Dec 16 15:25:36.420119 kernel: scsi host4: ahci Dec 16 15:25:36.420383 kernel: scsi host5: ahci Dec 16 15:25:36.420421 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Dec 16 15:25:36.420437 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Dec 16 15:25:36.420453 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Dec 16 15:25:36.420468 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Dec 16 15:25:36.420484 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Dec 16 15:25:36.420499 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Dec 16 15:25:36.420787 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 15:25:36.420827 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 16 15:25:36.420843 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 15:25:36.420858 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 15:25:36.420873 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 15:25:36.420888 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 15:25:36.420902 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 15:25:36.420928 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 15:25:36.421193 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 16 15:25:36.421217 kernel: usbcore: registered new interface driver usbhid Dec 16 15:25:36.421431 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 16 15:25:36.421454 kernel: usbhid: USB HID core driver Dec 16 15:25:36.421469 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 15:25:36.421500 kernel: GPT:25804799 != 125829119 Dec 16 15:25:36.421543 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 15:25:36.421560 kernel: GPT:25804799 != 125829119 Dec 16 15:25:36.421575 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 15:25:36.421589 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 15:25:36.421605 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 16 15:25:36.421884 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 16 15:25:36.421925 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 15:25:36.421941 kernel: device-mapper: uevent: version 1.0.3 Dec 16 15:25:36.421956 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 15:25:36.421972 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 15:25:36.421987 kernel: raid6: sse2x4 gen() 13354 MB/s Dec 16 15:25:36.422002 kernel: raid6: sse2x2 gen() 9234 MB/s Dec 16 15:25:36.422017 kernel: raid6: sse2x1 gen() 9752 MB/s Dec 16 15:25:36.422044 kernel: raid6: using algorithm sse2x4 gen() 13354 MB/s Dec 16 15:25:36.422059 kernel: raid6: .... xor() 7660 MB/s, rmw enabled Dec 16 15:25:36.422074 kernel: raid6: using ssse3x2 recovery algorithm Dec 16 15:25:36.422089 kernel: xor: automatically using best checksumming function avx Dec 16 15:25:36.422104 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 15:25:36.422119 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (194) Dec 16 15:25:36.422143 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f Dec 16 15:25:36.422171 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 15:25:36.422187 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 15:25:36.422202 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 15:25:36.422217 kernel: loop: module loaded Dec 16 15:25:36.422232 kernel: loop0: detected capacity change from 0 to 100136 Dec 16 15:25:36.422247 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 15:25:36.422269 systemd[1]: Successfully made /usr/ read-only. Dec 16 15:25:36.422301 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 15:25:36.422318 systemd[1]: Detected virtualization kvm. Dec 16 15:25:36.422334 systemd[1]: Detected architecture x86-64. Dec 16 15:25:36.422349 systemd[1]: Running in initrd. Dec 16 15:25:36.422365 systemd[1]: No hostname configured, using default hostname. Dec 16 15:25:36.422392 systemd[1]: Hostname set to . Dec 16 15:25:36.422408 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 15:25:36.422424 systemd[1]: Queued start job for default target initrd.target. Dec 16 15:25:36.422440 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 15:25:36.422456 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 15:25:36.422472 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 15:25:36.422488 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 15:25:36.422532 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 15:25:36.422551 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 15:25:36.422568 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 15:25:36.422584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 16 15:25:36.422601 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 15:25:36.422630 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 15:25:36.422647 systemd[1]: Reached target paths.target - Path Units. Dec 16 15:25:36.422663 systemd[1]: Reached target slices.target - Slice Units. Dec 16 15:25:36.422679 systemd[1]: Reached target swap.target - Swaps. Dec 16 15:25:36.422695 systemd[1]: Reached target timers.target - Timer Units. Dec 16 15:25:36.422711 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 15:25:36.422727 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 15:25:36.422754 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 15:25:36.422771 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 15:25:36.422788 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 15:25:36.422804 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 15:25:36.422820 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 15:25:36.422836 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 15:25:36.422852 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 15:25:36.422879 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 15:25:36.422896 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 15:25:36.422912 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 15:25:36.422928 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 15:25:36.422945 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 15:25:36.422962 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 15:25:36.422978 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 15:25:36.423005 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 15:25:36.423022 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 15:25:36.423038 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 15:25:36.423064 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 15:25:36.423082 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 15:25:36.423098 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 15:25:36.423176 systemd-journald[330]: Collecting audit messages is enabled. Dec 16 15:25:36.423225 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 15:25:36.423242 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 15:25:36.423259 kernel: audit: type=1130 audit(1765898736.355:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:25:36.423274 kernel: Bridge firewalling registered Dec 16 15:25:36.423290 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 15:25:36.423308 systemd-journald[330]: Journal started Dec 16 15:25:36.423346 systemd-journald[330]: Runtime Journal (/run/log/journal/735048b69ff1420a9a9ea3f2248f5045) is 4.7M, max 37.8M, 33M free. Dec 16 15:25:36.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.367639 systemd-modules-load[333]: Inserted module 'br_netfilter' Dec 16 15:25:36.446259 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 15:25:36.446304 kernel: audit: type=1130 audit(1765898736.444:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.450888 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 15:25:36.464337 kernel: audit: type=1130 audit(1765898736.452:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.464376 kernel: audit: type=1130 audit(1765898736.458:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.458595 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 15:25:36.465840 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 15:25:36.468694 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 15:25:36.478116 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 15:25:36.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.484351 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 15:25:36.491272 kernel: audit: type=1130 audit(1765898736.484:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.500363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 16 15:25:36.507621 kernel: audit: type=1130 audit(1765898736.501:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.503954 systemd-tmpfiles[354]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 15:25:36.512365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 15:25:36.509000 audit: BPF prog-id=6 op=LOAD Dec 16 15:25:36.517752 kernel: audit: type=1334 audit(1765898736.509:8): prog-id=6 op=LOAD Dec 16 15:25:36.517966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 15:25:36.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.525567 kernel: audit: type=1130 audit(1765898736.519:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.530862 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 15:25:36.538458 kernel: audit: type=1130 audit(1765898736.531:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.533737 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 15:25:36.570820 dracut-cmdline[373]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 15:25:36.596928 systemd-resolved[367]: Positive Trust Anchors: Dec 16 15:25:36.596953 systemd-resolved[367]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 15:25:36.596960 systemd-resolved[367]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 15:25:36.597016 systemd-resolved[367]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 15:25:36.639409 systemd-resolved[367]: Defaulting to hostname 'linux'. Dec 16 15:25:36.641765 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 15:25:36.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.643620 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 15:25:36.727567 kernel: Loading iSCSI transport class v2.0-870. Dec 16 15:25:36.745551 kernel: iscsi: registered transport (tcp) Dec 16 15:25:36.776088 kernel: iscsi: registered transport (qla4xxx) Dec 16 15:25:36.776190 kernel: QLogic iSCSI HBA Driver Dec 16 15:25:36.811301 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 15:25:36.836637 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 15:25:36.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.838452 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 15:25:36.908248 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 15:25:36.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.911943 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 15:25:36.913713 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 15:25:36.964227 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 15:25:36.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:36.965000 audit: BPF prog-id=7 op=LOAD Dec 16 15:25:36.965000 audit: BPF prog-id=8 op=LOAD Dec 16 15:25:36.967732 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 15:25:37.003134 systemd-udevd[612]: Using default interface naming scheme 'v257'. Dec 16 15:25:37.019650 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 15:25:37.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:25:37.024424 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 15:25:37.061833 dracut-pre-trigger[678]: rd.md=0: removing MD RAID activation Dec 16 15:25:37.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:37.064594 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 15:25:37.066000 audit: BPF prog-id=9 op=LOAD Dec 16 15:25:37.068769 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 15:25:37.105615 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 15:25:37.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:37.110742 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 15:25:37.134085 systemd-networkd[721]: lo: Link UP Dec 16 15:25:37.134098 systemd-networkd[721]: lo: Gained carrier Dec 16 15:25:37.137803 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 15:25:37.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:37.138729 systemd[1]: Reached target network.target - Network. Dec 16 15:25:37.274275 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 15:25:37.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:37.277637 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 15:25:37.425438 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 15:25:37.439635 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 15:25:37.460541 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 15:25:37.473365 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 15:25:37.475804 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 15:25:37.507094 disk-uuid[776]: Primary Header is updated. Dec 16 15:25:37.507094 disk-uuid[776]: Secondary Entries is updated. Dec 16 15:25:37.507094 disk-uuid[776]: Secondary Header is updated. Dec 16 15:25:37.524599 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 15:25:37.577168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 15:25:37.577361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 15:25:37.587906 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 15:25:37.587944 kernel: audit: type=1131 audit(1765898737.578:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:25:37.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:37.585653 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 15:25:37.591277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 15:25:37.602308 kernel: AES CTR mode by8 optimization enabled Dec 16 15:25:37.630207 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 16 15:25:37.648918 systemd-networkd[721]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 15:25:37.650429 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 15:25:37.653365 systemd-networkd[721]: eth0: Link UP Dec 16 15:25:37.653773 systemd-networkd[721]: eth0: Gained carrier Dec 16 15:25:37.653794 systemd-networkd[721]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 15:25:37.668639 systemd-networkd[721]: eth0: DHCPv4 address 10.230.25.166/30, gateway 10.230.25.165 acquired from 10.230.25.165 Dec 16 15:25:37.743134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 15:25:37.751938 kernel: audit: type=1130 audit(1765898737.743:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:37.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:37.814257 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 15:25:37.820922 kernel: audit: type=1130 audit(1765898737.814:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:37.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:37.815769 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 15:25:37.821664 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 15:25:37.823319 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 15:25:37.826442 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 15:25:37.860908 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 15:25:37.867326 kernel: audit: type=1130 audit(1765898737.861:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:37.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:38.564090 disk-uuid[777]: Warning: The kernel is still using the old partition table. 
Dec 16 15:25:38.564090 disk-uuid[777]: The new table will be used at the next reboot or after you Dec 16 15:25:38.564090 disk-uuid[777]: run partprobe(8) or kpartx(8) Dec 16 15:25:38.564090 disk-uuid[777]: The operation has completed successfully. Dec 16 15:25:38.574314 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 15:25:38.574548 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 15:25:38.586266 kernel: audit: type=1130 audit(1765898738.575:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:38.586307 kernel: audit: type=1131 audit(1765898738.575:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:38.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:38.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:38.579695 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 15:25:38.622555 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Dec 16 15:25:38.631743 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 15:25:38.631827 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 15:25:38.639164 kernel: BTRFS info (device vda6): turning on async discard Dec 16 15:25:38.639259 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 15:25:38.649602 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 15:25:38.651456 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 15:25:38.657888 kernel: audit: type=1130 audit(1765898738.651:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:38.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:38.654738 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 15:25:38.892220 ignition[882]: Ignition 2.22.0 Dec 16 15:25:38.892246 ignition[882]: Stage: fetch-offline Dec 16 15:25:38.892365 ignition[882]: no configs at "/usr/lib/ignition/base.d" Dec 16 15:25:38.892387 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 15:25:38.902784 kernel: audit: type=1130 audit(1765898738.896:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:38.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:25:38.895904 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 15:25:38.892634 ignition[882]: parsed url from cmdline: "" Dec 16 15:25:38.899019 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 15:25:38.892641 ignition[882]: no config URL provided Dec 16 15:25:38.892657 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 15:25:38.892677 ignition[882]: no config at "/usr/lib/ignition/user.ign" Dec 16 15:25:38.892692 ignition[882]: failed to fetch config: resource requires networking Dec 16 15:25:38.893158 ignition[882]: Ignition finished successfully Dec 16 15:25:38.940804 ignition[890]: Ignition 2.22.0 Dec 16 15:25:38.941592 ignition[890]: Stage: fetch Dec 16 15:25:38.941852 ignition[890]: no configs at "/usr/lib/ignition/base.d" Dec 16 15:25:38.941870 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 15:25:38.942010 ignition[890]: parsed url from cmdline: "" Dec 16 15:25:38.942017 ignition[890]: no config URL provided Dec 16 15:25:38.942028 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 15:25:38.942042 ignition[890]: no config at "/usr/lib/ignition/user.ign" Dec 16 15:25:38.942252 ignition[890]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 16 15:25:38.943766 ignition[890]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 16 15:25:38.943820 ignition[890]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 16 15:25:38.958990 ignition[890]: GET result: OK Dec 16 15:25:38.959753 ignition[890]: parsing config with SHA512: 5d4ea3d522d7274052ea3bf6458a017f3df564c75a09e86a805090bef0a03570ff0283c26956dc81a5051da21ab64d4b07897385250999522a27b17c03bf2e27 Dec 16 15:25:38.971939 unknown[890]: fetched base config from "system" Dec 16 15:25:38.971962 unknown[890]: fetched base config from "system" Dec 16 15:25:38.972449 ignition[890]: fetch: fetch complete Dec 16 15:25:38.971972 unknown[890]: fetched user config from "openstack" Dec 16 15:25:38.972458 ignition[890]: fetch: fetch passed Dec 16 15:25:38.975240 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 15:25:38.981933 kernel: audit: type=1130 audit(1765898738.975:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:38.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:38.972670 ignition[890]: Ignition finished successfully Dec 16 15:25:38.979725 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 15:25:39.035714 ignition[897]: Ignition 2.22.0 Dec 16 15:25:39.035739 ignition[897]: Stage: kargs Dec 16 15:25:39.035932 ignition[897]: no configs at "/usr/lib/ignition/base.d" Dec 16 15:25:39.035950 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 15:25:39.036987 ignition[897]: kargs: kargs passed Dec 16 15:25:39.046375 kernel: audit: type=1130 audit(1765898739.040:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:25:39.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:39.040032 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 15:25:39.037073 ignition[897]: Ignition finished successfully Dec 16 15:25:39.043745 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 15:25:39.082384 ignition[903]: Ignition 2.22.0 Dec 16 15:25:39.082409 ignition[903]: Stage: disks Dec 16 15:25:39.082628 ignition[903]: no configs at "/usr/lib/ignition/base.d" Dec 16 15:25:39.082646 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 15:25:39.083870 ignition[903]: disks: disks passed Dec 16 15:25:39.083944 ignition[903]: Ignition finished successfully Dec 16 15:25:39.087684 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 15:25:39.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:39.089096 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 15:25:39.090149 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 15:25:39.091862 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 15:25:39.093587 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 15:25:39.095027 systemd[1]: Reached target basic.target - Basic System. Dec 16 15:25:39.098095 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 15:25:39.143252 systemd-fsck[911]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 16 15:25:39.147856 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 15:25:39.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:39.150255 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 15:25:39.295530 kernel: EXT4-fs (vda9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none. Dec 16 15:25:39.296384 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 15:25:39.297717 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 15:25:39.301067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 15:25:39.303627 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 15:25:39.306782 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 15:25:39.307778 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 16 15:25:39.311680 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 15:25:39.311729 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 15:25:39.324711 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 15:25:39.330737 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 15:25:39.344434 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (919) Dec 16 15:25:39.344471 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 15:25:39.344491 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 15:25:39.349590 kernel: BTRFS info (device vda6): turning on async discard Dec 16 15:25:39.349638 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 15:25:39.353341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 15:25:39.411545 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 15:25:39.437062 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 15:25:39.444406 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Dec 16 15:25:39.450885 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 15:25:39.457462 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 15:25:39.531680 systemd-networkd[721]: eth0: Gained IPv6LL Dec 16 15:25:39.577066 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 15:25:39.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:39.579281 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 15:25:39.581736 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 15:25:39.605564 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 15:25:39.609795 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 15:25:39.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:39.635499 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 15:25:39.648254 ignition[1037]: INFO : Ignition 2.22.0 Dec 16 15:25:39.650542 ignition[1037]: INFO : Stage: mount Dec 16 15:25:39.650542 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 15:25:39.650542 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 15:25:39.653553 ignition[1037]: INFO : mount: mount passed Dec 16 15:25:39.653553 ignition[1037]: INFO : Ignition finished successfully Dec 16 15:25:39.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:39.653883 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 15:25:40.456571 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 15:25:41.040951 systemd-networkd[721]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8669:24:19ff:fee6:19a6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8669:24:19ff:fee6:19a6/64 assigned by NDisc. Dec 16 15:25:41.040972 systemd-networkd[721]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Dec 16 15:25:42.463557 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 15:25:46.470549 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 15:25:46.477924 coreos-metadata[921]: Dec 16 15:25:46.477 WARN failed to locate config-drive, using the metadata service API instead Dec 16 15:25:46.505361 coreos-metadata[921]: Dec 16 15:25:46.505 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 15:25:46.517780 coreos-metadata[921]: Dec 16 15:25:46.517 INFO Fetch successful Dec 16 15:25:46.518671 coreos-metadata[921]: Dec 16 15:25:46.518 INFO wrote hostname srv-g2i2t.gb1.brightbox.com to /sysroot/etc/hostname Dec 16 15:25:46.520990 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 16 15:25:46.535746 kernel: kauditd_printk_skb: 5 callbacks suppressed Dec 16 15:25:46.535784 kernel: audit: type=1130 audit(1765898746.522:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:46.535826 kernel: audit: type=1131 audit(1765898746.522:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:46.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:46.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:46.521174 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 16 15:25:46.526558 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 15:25:46.551170 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 15:25:46.576543 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1053) Dec 16 15:25:46.576608 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 15:25:46.579790 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 15:25:46.584631 kernel: BTRFS info (device vda6): turning on async discard Dec 16 15:25:46.584687 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 15:25:46.588441 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 15:25:46.633550 ignition[1071]: INFO : Ignition 2.22.0 Dec 16 15:25:46.633550 ignition[1071]: INFO : Stage: files Dec 16 15:25:46.633550 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 15:25:46.633550 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 15:25:46.637252 ignition[1071]: DEBUG : files: compiled without relabeling support, skipping Dec 16 15:25:46.637252 ignition[1071]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 15:25:46.637252 ignition[1071]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 15:25:46.641174 ignition[1071]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 15:25:46.642440 ignition[1071]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 15:25:46.643461 ignition[1071]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 15:25:46.642978 unknown[1071]: wrote ssh authorized keys file for user: core Dec 16 15:25:46.645862 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 15:25:46.645862 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 15:25:46.899966 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 15:25:47.172603 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 15:25:47.172603 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 15:25:47.172603 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 15:25:47.172603 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 15:25:47.179612 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 15:25:47.179612 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 15:25:47.179612 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 15:25:47.179612 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 15:25:47.179612 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 15:25:47.179612 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 15:25:47.179612 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 15:25:47.179612 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 15:25:47.179612 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 15:25:47.179612 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 15:25:47.179612 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Dec 16 15:25:47.572044 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 15:25:48.817922 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 15:25:48.823635 ignition[1071]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 15:25:48.833445 ignition[1071]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 15:25:48.842311 ignition[1071]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 15:25:48.842311 ignition[1071]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 15:25:48.842311 ignition[1071]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 15:25:48.845612 ignition[1071]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 15:25:48.845612 ignition[1071]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 15:25:48.845612 ignition[1071]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 15:25:48.845612 ignition[1071]: INFO : files: files passed Dec 16 15:25:48.845612 ignition[1071]: INFO : Ignition finished successfully Dec 16 15:25:48.858844 kernel: audit: type=1130 audit(1765898748.849:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:48.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:48.846800 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 15:25:48.851906 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 15:25:48.860383 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 15:25:48.871341 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 15:25:48.872292 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 15:25:48.879608 kernel: audit: type=1130 audit(1765898748.872:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:48.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:25:48.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:48.885538 kernel: audit: type=1131 audit(1765898748.878:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:48.887610 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 15:25:48.889073 initrd-setup-root-after-ignition[1102]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 15:25:48.890321 initrd-setup-root-after-ignition[1106]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 15:25:48.893765 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 15:25:48.900833 kernel: audit: type=1130 audit(1765898748.894:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:48.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:48.894956 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 15:25:48.902983 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 15:25:48.961483 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 15:25:48.961690 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 15:25:48.974011 kernel: audit: type=1130 audit(1765898748.962:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:48.974056 kernel: audit: type=1131 audit(1765898748.962:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:48.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:48.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:48.963642 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 15:25:48.974762 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 15:25:48.982412 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 15:25:48.984708 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 15:25:49.016587 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Dec 16 15:25:49.023215 kernel: audit: type=1130 audit(1765898749.016:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.020709 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 15:25:49.046013 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 15:25:49.047112 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 15:25:49.048056 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 15:25:49.049918 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 15:25:49.051496 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 15:25:49.058891 kernel: audit: type=1131 audit(1765898749.052:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.051700 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 15:25:49.058717 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 15:25:49.059674 systemd[1]: Stopped target basic.target - Basic System. Dec 16 15:25:49.061334 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 15:25:49.062836 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 15:25:49.064460 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 15:25:49.066164 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 15:25:49.068067 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 15:25:49.069578 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 15:25:49.071164 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 15:25:49.072761 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 15:25:49.074367 systemd[1]: Stopped target swap.target - Swaps. Dec 16 15:25:49.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.076000 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 15:25:49.076274 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 15:25:49.077959 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 15:25:49.078934 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 15:25:49.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.080457 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Dec 16 15:25:49.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.080913 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 15:25:49.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.082174 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 15:25:49.082431 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 15:25:49.084335 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 15:25:49.084548 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 15:25:49.085695 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 15:25:49.085946 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 15:25:49.089679 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 15:25:49.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.092847 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 15:25:49.094933 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 15:25:49.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.095684 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 15:25:49.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.097709 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 15:25:49.098685 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 15:25:49.100109 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 15:25:49.100358 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 15:25:49.111421 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 15:25:49.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.112614 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 15:25:49.132678 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 16 15:25:49.145348 ignition[1126]: INFO : Ignition 2.22.0 Dec 16 15:25:49.145348 ignition[1126]: INFO : Stage: umount Dec 16 15:25:49.148265 ignition[1126]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 15:25:49.148265 ignition[1126]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 15:25:49.148265 ignition[1126]: INFO : umount: umount passed Dec 16 15:25:49.148265 ignition[1126]: INFO : Ignition finished successfully Dec 16 15:25:49.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.148712 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 15:25:49.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.148933 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 15:25:49.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.150799 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 15:25:49.150903 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 15:25:49.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.152174 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 15:25:49.152245 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 15:25:49.153463 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 15:25:49.153573 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 15:25:49.154830 systemd[1]: Stopped target network.target - Network. Dec 16 15:25:49.156078 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 15:25:49.156162 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 15:25:49.157606 systemd[1]: Stopped target paths.target - Path Units. Dec 16 15:25:49.158987 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 15:25:49.162606 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 15:25:49.163467 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 15:25:49.165047 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 15:25:49.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.166646 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 15:25:49.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.166741 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Dec 16 15:25:49.167979 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 15:25:49.168039 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 15:25:49.169443 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 15:25:49.169491 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 15:25:49.171150 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 15:25:49.171241 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 15:25:49.172588 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 15:25:49.172668 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 15:25:49.174133 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 15:25:49.176178 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 15:25:49.189841 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 15:25:49.190055 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 15:25:49.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.193000 audit: BPF prog-id=6 op=UNLOAD Dec 16 15:25:49.193353 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 15:25:49.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.193672 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 15:25:49.201000 audit: BPF prog-id=9 op=UNLOAD Dec 16 15:25:49.202333 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 15:25:49.203146 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 15:25:49.203238 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 15:25:49.205851 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 15:25:49.207920 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 15:25:49.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.208002 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 15:25:49.210031 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 15:25:49.210107 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 15:25:49.210849 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 15:25:49.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.210916 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 15:25:49.215740 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 16 15:25:49.229933 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 15:25:49.230194 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 15:25:49.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.232089 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 15:25:49.232161 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 15:25:49.233287 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 15:25:49.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.233354 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 15:25:49.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.234773 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 15:25:49.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.234850 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 15:25:49.236942 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 15:25:49.237013 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 15:25:49.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.238411 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 15:25:49.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.238479 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 15:25:49.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.243159 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 15:25:49.245610 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 15:25:49.245691 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 15:25:49.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.247638 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 16 15:25:49.247730 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 15:25:49.248985 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 15:25:49.249062 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 15:25:49.249871 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 15:25:49.249944 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 15:25:49.255131 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 15:25:49.255240 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 15:25:49.267699 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 15:25:49.267870 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 15:25:49.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.274055 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 15:25:49.274236 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 15:25:49.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.276167 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 15:25:49.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.276307 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 15:25:49.315460 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 15:25:49.315705 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 15:25:49.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:49.317432 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 15:25:49.320736 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 15:25:49.358992 systemd[1]: Switching root. Dec 16 15:25:49.398524 systemd-journald[330]: Journal stopped Dec 16 15:25:51.027739 systemd-journald[330]: Received SIGTERM from PID 1 (systemd). 
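The entries above mark the hand-off from the initrd to the real root filesystem: systemd logs "Switching root." at 15:25:49.358992, the initrd journal stops, and the next entries come from the freshly started host journal at 15:25:51. A minimal sketch of how such gaps can be measured from this dump, assuming the plain "Mon DD HH:MM:SS.ffffff" prefix shown here (the year is not part of the prefix, so it is pinned for the arithmetic, and parse_ts is an illustrative helper, not part of any tool in this log):

    from datetime import datetime

    def parse_ts(line, year=2025):
        # The first three whitespace-separated fields form the timestamp, e.g. "Dec 16 15:25:49.358992".
        stamp = " ".join(line.split()[:3])
        return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

    switch_root = parse_ts("Dec 16 15:25:49.358992 systemd[1]: Switching root.")
    first_host_entry = parse_ts("Dec 16 15:25:51.027842 kernel: SELinux: policy capability network_peer_controls=1")
    print((first_host_entry - switch_root).total_seconds())  # ~1.67 s between switching root and the first host-journal entry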
Dec 16 15:25:51.027842 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 15:25:51.027875 kernel: SELinux: policy capability open_perms=1 Dec 16 15:25:51.027903 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 15:25:51.027925 kernel: SELinux: policy capability always_check_network=0 Dec 16 15:25:51.027945 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 15:25:51.027966 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 15:25:51.027997 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 15:25:51.028023 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 15:25:51.028053 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 15:25:51.028077 systemd[1]: Successfully loaded SELinux policy in 81.546ms. Dec 16 15:25:51.028102 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.830ms. Dec 16 15:25:51.028131 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 15:25:51.028154 systemd[1]: Detected virtualization kvm. Dec 16 15:25:51.028192 systemd[1]: Detected architecture x86-64. Dec 16 15:25:51.028215 systemd[1]: Detected first boot. Dec 16 15:25:51.028246 systemd[1]: Hostname set to . Dec 16 15:25:51.028284 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 15:25:51.028308 zram_generator::config[1174]: No configuration found. Dec 16 15:25:51.028332 kernel: Guest personality initialized and is inactive Dec 16 15:25:51.028365 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 15:25:51.028393 kernel: Initialized host personality Dec 16 15:25:51.028415 kernel: NET: Registered PF_VSOCK protocol family Dec 16 15:25:51.028436 systemd[1]: Populated /etc with preset unit settings. Dec 16 15:25:51.028458 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 15:25:51.028480 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 15:25:51.028502 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 15:25:51.028560 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 15:25:51.028586 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 15:25:51.028609 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 15:25:51.028630 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 15:25:51.034305 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 15:25:51.034347 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 15:25:51.034382 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 15:25:51.034414 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 15:25:51.034445 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 15:25:51.034469 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 15:25:51.034490 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Dec 16 15:25:51.034530 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 15:25:51.034559 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 15:25:51.034597 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 15:25:51.034628 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 15:25:51.034651 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 15:25:51.034688 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 15:25:51.034712 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 15:25:51.034734 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 15:25:51.034762 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 15:25:51.034785 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 15:25:51.034807 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 15:25:51.034828 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 15:25:51.034851 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 15:25:51.034876 systemd[1]: Reached target slices.target - Slice Units. Dec 16 15:25:51.034898 systemd[1]: Reached target swap.target - Swaps. Dec 16 15:25:51.034920 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 15:25:51.034947 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 15:25:51.034970 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 15:25:51.034992 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 15:25:51.035014 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 15:25:51.035037 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 15:25:51.035058 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 15:25:51.035079 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 15:25:51.035106 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 15:25:51.035129 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 15:25:51.035151 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 15:25:51.035173 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 15:25:51.035196 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 15:25:51.035219 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 15:25:51.035242 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 15:25:51.035269 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 15:25:51.035291 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 15:25:51.035313 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 15:25:51.035335 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Dec 16 15:25:51.035358 systemd[1]: Reached target machines.target - Containers. Dec 16 15:25:51.035380 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 15:25:51.035406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 15:25:51.035432 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 15:25:51.035453 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 15:25:51.035475 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 15:25:51.035497 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 15:25:51.037008 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 15:25:51.037042 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 15:25:51.037074 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 15:25:51.037097 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 15:25:51.037119 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 15:25:51.037140 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 15:25:51.037163 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 15:25:51.037191 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 15:25:51.037215 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 15:25:51.037248 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 15:25:51.037271 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 15:25:51.037294 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 15:25:51.037316 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 15:25:51.037344 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 15:25:51.037367 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 15:25:51.037390 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 15:25:51.037412 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 15:25:51.037434 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 15:25:51.037455 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 15:25:51.037485 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 15:25:51.037508 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 15:25:51.037556 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 15:25:51.037580 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 15:25:51.037612 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 15:25:51.037635 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Dec 16 15:25:51.037656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 15:25:51.037693 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 15:25:51.037716 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 15:25:51.037738 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 15:25:51.037760 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 15:25:51.037796 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 15:25:51.037820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 15:25:51.037843 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 15:25:51.037865 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 15:25:51.037888 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 15:25:51.037910 kernel: ACPI: bus type drm_connector registered Dec 16 15:25:51.037933 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 15:25:51.037960 kernel: fuse: init (API version 7.41) Dec 16 15:25:51.037982 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 15:25:51.038004 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 15:25:51.038031 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 15:25:51.038055 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 15:25:51.038077 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 15:25:51.038100 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 15:25:51.038189 systemd-journald[1265]: Collecting audit messages is enabled. Dec 16 15:25:51.038246 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 15:25:51.038273 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 15:25:51.038297 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 15:25:51.038320 systemd-journald[1265]: Journal started Dec 16 15:25:51.038366 systemd-journald[1265]: Runtime Journal (/run/log/journal/735048b69ff1420a9a9ea3f2248f5045) is 4.7M, max 37.8M, 33M free. Dec 16 15:25:50.635000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 15:25:51.042609 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 15:25:51.042682 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 15:25:50.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:25:50.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.814000 audit: BPF prog-id=14 op=UNLOAD Dec 16 15:25:50.814000 audit: BPF prog-id=13 op=UNLOAD Dec 16 15:25:50.815000 audit: BPF prog-id=15 op=LOAD Dec 16 15:25:50.815000 audit: BPF prog-id=16 op=LOAD Dec 16 15:25:50.815000 audit: BPF prog-id=17 op=LOAD Dec 16 15:25:50.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:25:50.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:50.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.019000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 15:25:51.019000 audit[1265]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff614d3030 a2=4000 a3=0 items=0 ppid=1 pid=1265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:25:51.019000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 15:25:50.509891 systemd[1]: Queued start job for default target multi-user.target. Dec 16 15:25:50.537294 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 15:25:50.538295 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 15:25:51.055546 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 15:25:51.062560 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 15:25:51.077547 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 15:25:51.077642 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 15:25:51.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.078396 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 15:25:51.086594 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 15:25:51.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.088112 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 15:25:51.088422 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 15:25:51.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 15:25:51.092168 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 15:25:51.114062 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 15:25:51.121818 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 15:25:51.128552 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 15:25:51.143539 kernel: loop1: detected capacity change from 0 to 119256 Dec 16 15:25:51.146191 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Dec 16 15:25:51.146619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 15:25:51.146771 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Dec 16 15:25:51.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.154799 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 15:25:51.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.162045 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 15:25:51.170306 systemd-journald[1265]: Time spent on flushing to /var/log/journal/735048b69ff1420a9a9ea3f2248f5045 is 97.903ms for 1306 entries. Dec 16 15:25:51.170306 systemd-journald[1265]: System Journal (/var/log/journal/735048b69ff1420a9a9ea3f2248f5045) is 8M, max 588.1M, 580.1M free. Dec 16 15:25:51.294406 systemd-journald[1265]: Received client request to flush runtime journal. Dec 16 15:25:51.294475 kernel: loop2: detected capacity change from 0 to 219144 Dec 16 15:25:51.294559 kernel: loop3: detected capacity change from 0 to 8 Dec 16 15:25:51.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.245000 audit: BPF prog-id=18 op=LOAD Dec 16 15:25:51.247000 audit: BPF prog-id=19 op=LOAD Dec 16 15:25:51.247000 audit: BPF prog-id=20 op=LOAD Dec 16 15:25:51.251000 audit: BPF prog-id=21 op=LOAD Dec 16 15:25:51.279000 audit: BPF prog-id=22 op=LOAD Dec 16 15:25:51.279000 audit: BPF prog-id=23 op=LOAD Dec 16 15:25:51.279000 audit: BPF prog-id=24 op=LOAD Dec 16 15:25:51.285000 audit: BPF prog-id=25 op=LOAD Dec 16 15:25:51.286000 audit: BPF prog-id=26 op=LOAD Dec 16 15:25:51.286000 audit: BPF prog-id=27 op=LOAD Dec 16 15:25:51.175785 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 16 15:25:51.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.223075 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 15:25:51.243407 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 15:25:51.249829 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 15:25:51.254824 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 15:25:51.260898 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 15:25:51.281871 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 15:25:51.289832 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 15:25:51.297585 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 15:25:51.313536 kernel: loop4: detected capacity change from 0 to 111544 Dec 16 15:25:51.328385 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Dec 16 15:25:51.328811 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Dec 16 15:25:51.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.347921 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 15:25:51.370222 kernel: loop5: detected capacity change from 0 to 119256 Dec 16 15:25:51.370271 systemd-nsresourced[1327]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 15:25:51.375773 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 15:25:51.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.402547 kernel: loop6: detected capacity change from 0 to 219144 Dec 16 15:25:51.416238 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 15:25:51.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.424539 kernel: loop7: detected capacity change from 0 to 8 Dec 16 15:25:51.434543 kernel: loop1: detected capacity change from 0 to 111544 Dec 16 15:25:51.458027 (sd-merge)[1335]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Dec 16 15:25:51.472825 (sd-merge)[1335]: Merged extensions into '/usr'. Dec 16 15:25:51.485386 systemd[1]: Reload requested from client PID 1288 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 15:25:51.485417 systemd[1]: Reloading... Dec 16 15:25:51.513474 systemd-oomd[1322]: No swap; memory pressure usage will be degraded Dec 16 15:25:51.580063 systemd-resolved[1323]: Positive Trust Anchors: Dec 16 15:25:51.580089 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 15:25:51.580097 systemd-resolved[1323]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 15:25:51.580143 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 15:25:51.606935 systemd-resolved[1323]: Using system hostname 'srv-g2i2t.gb1.brightbox.com'. Dec 16 15:25:51.618921 zram_generator::config[1379]: No configuration found. Dec 16 15:25:51.942216 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 15:25:51.943314 systemd[1]: Reloading finished in 457 ms. Dec 16 15:25:51.962835 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 15:25:51.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.964487 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 15:25:51.965272 kernel: kauditd_printk_skb: 103 callbacks suppressed Dec 16 15:25:51.965329 kernel: audit: type=1130 audit(1765898751.963:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.970468 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 15:25:51.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.976448 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 15:25:51.980703 kernel: audit: type=1130 audit(1765898751.969:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.981896 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 15:25:51.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.986541 kernel: audit: type=1130 audit(1765898751.970:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:51.996163 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 15:25:52.002829 systemd[1]: Starting ensure-sysext.service... Dec 16 15:25:52.010057 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
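The two trust-anchor records logged by systemd-resolved above (". IN DS 20326 ..." and ". IN DS 38696 ...") are the DNSSEC trust anchors it will use for the root zone; each consists of a key tag, algorithm, digest type and digest. A small illustrative check of that field layout, using the first record exactly as it appears in the log (this is only a sketch of the record format, not resolved's own code):

    # Owner, class, record type, then: key tag, algorithm (8 = RSA/SHA-256),
    # digest type (2 = SHA-256) and the digest itself.
    ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
    assert digest_type == "2" and len(digest) == 64  # SHA-256 digests are 32 bytes, i.e. 64 hex characters
    print(key_tag, algorithm, digest_type)            # 20326 8 2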
Dec 16 15:25:52.020000 audit: BPF prog-id=28 op=LOAD Dec 16 15:25:52.020000 audit: BPF prog-id=25 op=UNLOAD Dec 16 15:25:52.024294 kernel: audit: type=1334 audit(1765898752.020:152): prog-id=28 op=LOAD Dec 16 15:25:52.024348 kernel: audit: type=1334 audit(1765898752.020:153): prog-id=25 op=UNLOAD Dec 16 15:25:52.020000 audit: BPF prog-id=29 op=LOAD Dec 16 15:25:52.027576 kernel: audit: type=1334 audit(1765898752.020:154): prog-id=29 op=LOAD Dec 16 15:25:52.020000 audit: BPF prog-id=30 op=LOAD Dec 16 15:25:52.029881 kernel: audit: type=1334 audit(1765898752.020:155): prog-id=30 op=LOAD Dec 16 15:25:52.029934 kernel: audit: type=1334 audit(1765898752.020:156): prog-id=26 op=UNLOAD Dec 16 15:25:52.020000 audit: BPF prog-id=26 op=UNLOAD Dec 16 15:25:52.032534 kernel: audit: type=1334 audit(1765898752.020:157): prog-id=27 op=UNLOAD Dec 16 15:25:52.020000 audit: BPF prog-id=27 op=UNLOAD Dec 16 15:25:52.035591 kernel: audit: type=1334 audit(1765898752.030:158): prog-id=31 op=LOAD Dec 16 15:25:52.030000 audit: BPF prog-id=31 op=LOAD Dec 16 15:25:52.030000 audit: BPF prog-id=15 op=UNLOAD Dec 16 15:25:52.031000 audit: BPF prog-id=32 op=LOAD Dec 16 15:25:52.031000 audit: BPF prog-id=33 op=LOAD Dec 16 15:25:52.031000 audit: BPF prog-id=16 op=UNLOAD Dec 16 15:25:52.031000 audit: BPF prog-id=17 op=UNLOAD Dec 16 15:25:52.034000 audit: BPF prog-id=34 op=LOAD Dec 16 15:25:52.034000 audit: BPF prog-id=22 op=UNLOAD Dec 16 15:25:52.035000 audit: BPF prog-id=35 op=LOAD Dec 16 15:25:52.035000 audit: BPF prog-id=36 op=LOAD Dec 16 15:25:52.035000 audit: BPF prog-id=23 op=UNLOAD Dec 16 15:25:52.035000 audit: BPF prog-id=24 op=UNLOAD Dec 16 15:25:52.040000 audit: BPF prog-id=37 op=LOAD Dec 16 15:25:52.040000 audit: BPF prog-id=21 op=UNLOAD Dec 16 15:25:52.041000 audit: BPF prog-id=38 op=LOAD Dec 16 15:25:52.041000 audit: BPF prog-id=18 op=UNLOAD Dec 16 15:25:52.043000 audit: BPF prog-id=39 op=LOAD Dec 16 15:25:52.043000 audit: BPF prog-id=40 op=LOAD Dec 16 15:25:52.043000 audit: BPF prog-id=19 op=UNLOAD Dec 16 15:25:52.043000 audit: BPF prog-id=20 op=UNLOAD Dec 16 15:25:52.050017 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 15:25:52.051789 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 15:25:52.060240 systemd[1]: Reload requested from client PID 1436 ('systemctl') (unit ensure-sysext.service)... Dec 16 15:25:52.060424 systemd[1]: Reloading... Dec 16 15:25:52.086693 systemd-tmpfiles[1437]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 15:25:52.086752 systemd-tmpfiles[1437]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 15:25:52.087258 systemd-tmpfiles[1437]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 15:25:52.091498 systemd-tmpfiles[1437]: ACLs are not supported, ignoring. Dec 16 15:25:52.091646 systemd-tmpfiles[1437]: ACLs are not supported, ignoring. Dec 16 15:25:52.108916 systemd-tmpfiles[1437]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 15:25:52.108938 systemd-tmpfiles[1437]: Skipping /boot Dec 16 15:25:52.140125 systemd-tmpfiles[1437]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 15:25:52.140147 systemd-tmpfiles[1437]: Skipping /boot Dec 16 15:25:52.169549 zram_generator::config[1471]: No configuration found. Dec 16 15:25:52.477610 systemd[1]: Reloading finished in 416 ms. 
Dec 16 15:25:52.494710 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 15:25:52.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.502000 audit: BPF prog-id=41 op=LOAD Dec 16 15:25:52.502000 audit: BPF prog-id=34 op=UNLOAD Dec 16 15:25:52.502000 audit: BPF prog-id=42 op=LOAD Dec 16 15:25:52.502000 audit: BPF prog-id=43 op=LOAD Dec 16 15:25:52.502000 audit: BPF prog-id=35 op=UNLOAD Dec 16 15:25:52.502000 audit: BPF prog-id=36 op=UNLOAD Dec 16 15:25:52.503000 audit: BPF prog-id=44 op=LOAD Dec 16 15:25:52.503000 audit: BPF prog-id=38 op=UNLOAD Dec 16 15:25:52.503000 audit: BPF prog-id=45 op=LOAD Dec 16 15:25:52.503000 audit: BPF prog-id=46 op=LOAD Dec 16 15:25:52.503000 audit: BPF prog-id=39 op=UNLOAD Dec 16 15:25:52.503000 audit: BPF prog-id=40 op=UNLOAD Dec 16 15:25:52.505000 audit: BPF prog-id=47 op=LOAD Dec 16 15:25:52.505000 audit: BPF prog-id=31 op=UNLOAD Dec 16 15:25:52.505000 audit: BPF prog-id=48 op=LOAD Dec 16 15:25:52.505000 audit: BPF prog-id=49 op=LOAD Dec 16 15:25:52.506000 audit: BPF prog-id=32 op=UNLOAD Dec 16 15:25:52.506000 audit: BPF prog-id=33 op=UNLOAD Dec 16 15:25:52.507000 audit: BPF prog-id=50 op=LOAD Dec 16 15:25:52.507000 audit: BPF prog-id=28 op=UNLOAD Dec 16 15:25:52.507000 audit: BPF prog-id=51 op=LOAD Dec 16 15:25:52.507000 audit: BPF prog-id=52 op=LOAD Dec 16 15:25:52.507000 audit: BPF prog-id=29 op=UNLOAD Dec 16 15:25:52.507000 audit: BPF prog-id=30 op=UNLOAD Dec 16 15:25:52.509000 audit: BPF prog-id=53 op=LOAD Dec 16 15:25:52.509000 audit: BPF prog-id=37 op=UNLOAD Dec 16 15:25:52.520177 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 15:25:52.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.533570 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 15:25:52.538833 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 15:25:52.543935 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 15:25:52.549959 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 15:25:52.551000 audit: BPF prog-id=54 op=LOAD Dec 16 15:25:52.551000 audit: BPF prog-id=55 op=LOAD Dec 16 15:25:52.552000 audit: BPF prog-id=7 op=UNLOAD Dec 16 15:25:52.552000 audit: BPF prog-id=8 op=UNLOAD Dec 16 15:25:52.554416 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 15:25:52.559926 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 15:25:52.567144 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 15:25:52.567435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 15:25:52.572560 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 15:25:52.582933 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Dec 16 15:25:52.590991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 15:25:52.591923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 15:25:52.592224 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 15:25:52.592377 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 15:25:52.593331 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 15:25:52.601085 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 15:25:52.601383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 15:25:52.601703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 15:25:52.601950 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 15:25:52.602090 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 15:25:52.602228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 15:25:52.610717 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 15:25:52.611045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 15:25:52.615365 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 15:25:52.616316 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 15:25:52.617621 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 15:25:52.617781 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 15:25:52.617969 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 15:25:52.640134 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 15:25:52.641900 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 15:25:52.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:25:52.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.645000 audit[1535]: SYSTEM_BOOT pid=1535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.653575 systemd[1]: Finished ensure-sysext.service. Dec 16 15:25:52.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.663132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 15:25:52.663571 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 15:25:52.665193 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 15:25:52.666818 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 15:25:52.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.669064 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 15:25:52.669853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 15:25:52.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.677855 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 15:25:52.678115 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 15:25:52.682000 audit: BPF prog-id=56 op=LOAD Dec 16 15:25:52.686559 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 15:25:52.690614 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Dec 16 15:25:52.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.726331 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 15:25:52.736226 systemd-udevd[1533]: Using default interface naming scheme 'v257'. Dec 16 15:25:52.780505 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 15:25:52.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.781862 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 15:25:52.802897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 15:25:52.804000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 15:25:52.804000 audit[1569]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc7d976dc0 a2=420 a3=0 items=0 ppid=1529 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:25:52.804000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 15:25:52.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:25:52.806499 augenrules[1569]: No rules Dec 16 15:25:52.810990 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 15:25:52.813187 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 15:25:52.814623 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 15:25:52.868260 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 15:25:52.872656 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 15:25:53.046903 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 15:25:53.063837 systemd-networkd[1577]: lo: Link UP Dec 16 15:25:53.066452 systemd-networkd[1577]: lo: Gained carrier Dec 16 15:25:53.076453 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 15:25:53.077792 systemd[1]: Reached target network.target - Network. Dec 16 15:25:53.087373 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 15:25:53.093709 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Dec 16 15:25:53.124500 systemd-networkd[1577]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 15:25:53.124533 systemd-networkd[1577]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 15:25:53.125931 systemd-networkd[1577]: eth0: Link UP Dec 16 15:25:53.126223 systemd-networkd[1577]: eth0: Gained carrier Dec 16 15:25:53.126251 systemd-networkd[1577]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 15:25:53.139675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 15:25:53.143608 systemd-networkd[1577]: eth0: DHCPv4 address 10.230.25.166/30, gateway 10.230.25.165 acquired from 10.230.25.165 Dec 16 15:25:53.149154 systemd-timesyncd[1551]: Network configuration changed, trying to establish connection. Dec 16 15:25:53.152838 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 15:25:53.179856 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 15:25:53.198403 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 15:25:53.249568 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 15:25:53.274557 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 16 15:25:53.289594 kernel: ACPI: button: Power Button [PWRF] Dec 16 15:25:53.369561 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 15:25:53.377264 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 15:25:53.514807 ldconfig[1531]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 15:25:53.521376 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 15:25:53.527849 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 15:25:53.563551 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 15:25:53.564761 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 15:25:53.565627 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 15:25:53.566426 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 15:25:53.567251 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 15:25:53.568207 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 15:25:53.569072 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 15:25:53.570036 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 15:25:53.570943 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 15:25:53.571670 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 15:25:53.572426 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 15:25:53.572473 systemd[1]: Reached target paths.target - Path Units. Dec 16 15:25:53.573127 systemd[1]: Reached target timers.target - Timer Units. 
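The DHCPv4 lease logged above for eth0 is a /30 (10.230.25.166/30, gateway 10.230.25.165), which leaves exactly two usable host addresses, so the gateway and this instance fill the whole subnet. A quick confirmation with Python's standard ipaddress module (purely illustrative):

    import ipaddress

    iface = ipaddress.ip_interface("10.230.25.166/30")  # address/prefix as acquired via DHCPv4
    print(iface.network)                                  # 10.230.25.164/30
    print(list(iface.network.hosts()))                    # the two usable hosts: .165 (gateway) and .166 (this instance)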
Dec 16 15:25:53.581049 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 15:25:53.585048 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 15:25:53.590269 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 15:25:53.591347 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 15:25:53.592132 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 15:25:53.600961 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 15:25:53.602169 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 15:25:53.604489 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 15:25:53.608442 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 15:25:53.609276 systemd[1]: Reached target basic.target - Basic System. Dec 16 15:25:53.610687 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 15:25:53.610738 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 15:25:53.614830 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 15:25:53.621874 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 15:25:53.626950 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 15:25:53.632964 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 15:25:53.647110 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 15:25:53.653845 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 15:25:53.654681 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 15:25:53.659867 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 15:25:53.664876 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 15:25:53.670299 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 15:25:53.675351 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 15:25:53.674860 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 15:25:53.687873 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 15:25:53.696595 jq[1631]: false Dec 16 15:25:53.699024 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 15:25:53.699859 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 15:25:53.701719 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 15:25:53.706907 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 15:25:53.711829 google_oslogin_nss_cache[1634]: oslogin_cache_refresh[1634]: Refreshing passwd entry cache Dec 16 15:25:53.713565 oslogin_cache_refresh[1634]: Refreshing passwd entry cache Dec 16 15:25:53.718795 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Dec 16 15:25:53.728737 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 15:25:53.730204 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 15:25:53.730912 oslogin_cache_refresh[1634]: Failure getting users, quitting Dec 16 15:25:53.731816 google_oslogin_nss_cache[1634]: oslogin_cache_refresh[1634]: Failure getting users, quitting Dec 16 15:25:53.731816 google_oslogin_nss_cache[1634]: oslogin_cache_refresh[1634]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 15:25:53.731816 google_oslogin_nss_cache[1634]: oslogin_cache_refresh[1634]: Refreshing group entry cache Dec 16 15:25:53.731630 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 15:25:53.730949 oslogin_cache_refresh[1634]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 15:25:53.731038 oslogin_cache_refresh[1634]: Refreshing group entry cache Dec 16 15:25:53.735545 google_oslogin_nss_cache[1634]: oslogin_cache_refresh[1634]: Failure getting groups, quitting Dec 16 15:25:53.735545 google_oslogin_nss_cache[1634]: oslogin_cache_refresh[1634]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 15:25:53.734505 oslogin_cache_refresh[1634]: Failure getting groups, quitting Dec 16 15:25:53.734557 oslogin_cache_refresh[1634]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 15:25:53.736190 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 15:25:53.737713 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 15:25:53.791901 extend-filesystems[1633]: Found /dev/vda6 Dec 16 15:25:53.795039 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 15:25:53.795453 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 15:25:53.809677 extend-filesystems[1633]: Found /dev/vda9 Dec 16 15:25:53.830644 update_engine[1642]: I20251216 15:25:53.829492 1642 main.cc:92] Flatcar Update Engine starting Dec 16 15:25:53.837693 extend-filesystems[1633]: Checking size of /dev/vda9 Dec 16 15:25:53.846456 jq[1643]: true Dec 16 15:25:53.861434 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 15:25:53.894342 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 15:25:53.894889 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 15:25:53.912885 tar[1653]: linux-amd64/LICENSE Dec 16 15:25:53.915339 tar[1653]: linux-amd64/helm Dec 16 15:25:53.917239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 15:25:53.939270 dbus-daemon[1629]: [system] SELinux support is enabled Dec 16 15:25:53.939631 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 15:25:53.945697 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 15:25:53.945745 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 15:25:53.946987 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Dec 16 15:25:53.947035 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 15:25:53.948819 jq[1671]: true Dec 16 15:25:53.957867 extend-filesystems[1633]: Resized partition /dev/vda9 Dec 16 15:25:53.967466 extend-filesystems[1687]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 15:25:53.981365 dbus-daemon[1629]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1577 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 15:25:53.996584 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Dec 16 15:25:53.999809 dbus-daemon[1629]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 15:25:54.003393 update_engine[1642]: I20251216 15:25:54.003327 1642 update_check_scheduler.cc:74] Next update check in 5m46s Dec 16 15:25:54.004508 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 15:25:54.005718 systemd[1]: Started update-engine.service - Update Engine. Dec 16 15:25:54.036499 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 15:25:54.212485 bash[1706]: Updated "/home/core/.ssh/authorized_keys" Dec 16 15:25:54.217025 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 15:25:54.229895 systemd[1]: Starting sshkeys.service... Dec 16 15:25:54.306046 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 15:25:54.310960 systemd-logind[1641]: Watching system buttons on /dev/input/event3 (Power Button) Dec 16 15:25:54.311006 systemd-logind[1641]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 15:25:54.370569 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Dec 16 15:25:54.379332 dbus-daemon[1629]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 15:25:54.382733 dbus-daemon[1629]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1692 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 15:25:54.388680 extend-filesystems[1687]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 15:25:54.388680 extend-filesystems[1687]: old_desc_blocks = 1, new_desc_blocks = 7 Dec 16 15:25:54.388680 extend-filesystems[1687]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Dec 16 15:25:54.453535 extend-filesystems[1633]: Resized filesystem in /dev/vda9 Dec 16 15:25:54.425372 systemd-logind[1641]: New seat seat0. Dec 16 15:25:54.436964 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 15:25:54.439502 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 15:25:54.461318 containerd[1667]: time="2025-12-16T15:25:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 15:25:54.442068 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 15:25:54.447418 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 15:25:54.454453 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Dec 16 15:25:54.464112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 15:25:54.507087 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 15:25:54.515901 containerd[1667]: time="2025-12-16T15:25:54.515504578Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 15:25:54.523101 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 15:25:54.612456 containerd[1667]: time="2025-12-16T15:25:54.612379761Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.305µs" Dec 16 15:25:54.612456 containerd[1667]: time="2025-12-16T15:25:54.612443040Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 15:25:54.612674 containerd[1667]: time="2025-12-16T15:25:54.612557026Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 15:25:54.612674 containerd[1667]: time="2025-12-16T15:25:54.612589412Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 15:25:54.615567 containerd[1667]: time="2025-12-16T15:25:54.612926682Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 15:25:54.615567 containerd[1667]: time="2025-12-16T15:25:54.612967818Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 15:25:54.615567 containerd[1667]: time="2025-12-16T15:25:54.613095838Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 15:25:54.615567 containerd[1667]: time="2025-12-16T15:25:54.613117885Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 15:25:54.615567 containerd[1667]: time="2025-12-16T15:25:54.613436625Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 15:25:54.615567 containerd[1667]: time="2025-12-16T15:25:54.613463172Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 15:25:54.615567 containerd[1667]: time="2025-12-16T15:25:54.613486620Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 15:25:54.615567 containerd[1667]: time="2025-12-16T15:25:54.613505487Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 15:25:54.615567 containerd[1667]: time="2025-12-16T15:25:54.613826114Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 15:25:54.615567 containerd[1667]: time="2025-12-16T15:25:54.613851042Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 15:25:54.615567 containerd[1667]: time="2025-12-16T15:25:54.613993435Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs 
type=io.containerd.snapshotter.v1 Dec 16 15:25:54.615567 containerd[1667]: time="2025-12-16T15:25:54.614324615Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 15:25:54.616011 containerd[1667]: time="2025-12-16T15:25:54.614382473Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 15:25:54.616011 containerd[1667]: time="2025-12-16T15:25:54.614408367Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 15:25:54.616619 containerd[1667]: time="2025-12-16T15:25:54.616417247Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 15:25:54.617592 containerd[1667]: time="2025-12-16T15:25:54.616900565Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 15:25:54.617592 containerd[1667]: time="2025-12-16T15:25:54.617024280Z" level=info msg="metadata content store policy set" policy=shared Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.620910547Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.620994465Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.621118732Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.621141366Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.621161469Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.621186838Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.621208336Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.621225754Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.621244615Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.621346969Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.621391025Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.621413040Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.621431905Z" 
level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 15:25:54.621778 containerd[1667]: time="2025-12-16T15:25:54.621451702Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.621794493Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.621841391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.621866267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.621907140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.621930895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.621960009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.621984932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.622005100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.622026101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.622043936Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.622063701Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.622106375Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.622195107Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.622223330Z" level=info msg="Start snapshots syncer" Dec 16 15:25:54.622254 containerd[1667]: time="2025-12-16T15:25:54.622254389Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 15:25:54.628533 containerd[1667]: time="2025-12-16T15:25:54.623019575Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 15:25:54.628533 containerd[1667]: time="2025-12-16T15:25:54.623265271Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623358515Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623555578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623605629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623626267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623646099Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623675712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623706341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623726568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623746369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623765140Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623810344Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623841921Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 15:25:54.628869 containerd[1667]: time="2025-12-16T15:25:54.623856816Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 15:25:54.629335 containerd[1667]: time="2025-12-16T15:25:54.623873275Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 15:25:54.629335 containerd[1667]: time="2025-12-16T15:25:54.623887658Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 15:25:54.629335 containerd[1667]: time="2025-12-16T15:25:54.623909668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 15:25:54.629335 containerd[1667]: time="2025-12-16T15:25:54.623928479Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 15:25:54.629335 containerd[1667]: time="2025-12-16T15:25:54.623971910Z" level=info msg="runtime interface created" Dec 16 15:25:54.629335 containerd[1667]: time="2025-12-16T15:25:54.623988824Z" level=info msg="created NRI interface" Dec 16 15:25:54.629335 containerd[1667]: time="2025-12-16T15:25:54.624003857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 15:25:54.629335 containerd[1667]: time="2025-12-16T15:25:54.624022351Z" level=info msg="Connect containerd service" Dec 16 15:25:54.629335 containerd[1667]: time="2025-12-16T15:25:54.624057991Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 15:25:54.629335 containerd[1667]: time="2025-12-16T15:25:54.625701677Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 15:25:54.838763 containerd[1667]: time="2025-12-16T15:25:54.829731184Z" level=info msg="Start subscribing containerd event" Dec 16 15:25:54.838763 containerd[1667]: time="2025-12-16T15:25:54.829878926Z" level=info msg="Start recovering state" Dec 16 15:25:54.838763 containerd[1667]: time="2025-12-16T15:25:54.830084792Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 15:25:54.838763 containerd[1667]: time="2025-12-16T15:25:54.830204196Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 15:25:54.838763 containerd[1667]: time="2025-12-16T15:25:54.830211319Z" level=info msg="Start event monitor" Dec 16 15:25:54.838763 containerd[1667]: time="2025-12-16T15:25:54.830315305Z" level=info msg="Start cni network conf syncer for default" Dec 16 15:25:54.838763 containerd[1667]: time="2025-12-16T15:25:54.830356267Z" level=info msg="Start streaming server" Dec 16 15:25:54.838763 containerd[1667]: time="2025-12-16T15:25:54.830391961Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 15:25:54.838763 containerd[1667]: time="2025-12-16T15:25:54.830430499Z" level=info msg="runtime interface starting up..." Dec 16 15:25:54.838763 containerd[1667]: time="2025-12-16T15:25:54.830452339Z" level=info msg="starting plugins..." Dec 16 15:25:54.838763 containerd[1667]: time="2025-12-16T15:25:54.830488827Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 15:25:54.831137 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 15:25:54.848528 containerd[1667]: time="2025-12-16T15:25:54.835680444Z" level=info msg="containerd successfully booted in 0.375576s" Dec 16 15:25:54.876626 locksmithd[1694]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 15:25:54.902437 polkitd[1724]: Started polkitd version 126 Dec 16 15:25:54.914426 polkitd[1724]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 15:25:54.917049 polkitd[1724]: Loading rules from directory /run/polkit-1/rules.d Dec 16 15:25:54.917141 polkitd[1724]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 15:25:54.917508 polkitd[1724]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 15:25:54.917788 polkitd[1724]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 15:25:54.917854 polkitd[1724]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 15:25:54.920932 polkitd[1724]: Finished loading, compiling and executing 2 rules Dec 16 15:25:54.921477 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 15:25:54.923132 dbus-daemon[1629]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 15:25:54.923981 polkitd[1724]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 15:25:54.948889 systemd-hostnamed[1692]: Hostname set to (static) Dec 16 15:25:54.955752 systemd-networkd[1577]: eth0: Gained IPv6LL Dec 16 15:25:54.958759 systemd-timesyncd[1551]: Network configuration changed, trying to establish connection. Dec 16 15:25:54.963590 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 15:25:54.965455 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 15:25:54.976068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 15:25:54.982370 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 15:25:55.061645 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 15:25:55.089167 sshd_keygen[1673]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 15:25:55.159178 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 15:25:55.185051 tar[1653]: linux-amd64/README.md Dec 16 15:25:55.202355 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Dec 16 15:25:55.216138 systemd[1]: Started sshd@0-10.230.25.166:22-139.178.89.65:33656.service - OpenSSH per-connection server daemon (139.178.89.65:33656). Dec 16 15:25:55.221501 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 15:25:55.229419 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 15:25:55.231542 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 15:25:55.241028 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 15:25:55.289073 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 15:25:55.294066 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 15:25:55.299238 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 15:25:55.300990 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 15:25:55.737563 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 15:25:55.737839 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 15:25:56.062248 sshd[1773]: Accepted publickey for core from 139.178.89.65 port 33656 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:25:56.065106 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:25:56.092328 systemd-logind[1641]: New session 1 of user core. Dec 16 15:25:56.097064 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 15:25:56.107579 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 15:25:56.122568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 15:25:56.139744 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 15:25:56.139970 (kubelet)[1791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 15:25:56.145122 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 15:25:56.164363 (systemd)[1794]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 15:25:56.170336 systemd-logind[1641]: New session c1 of user core. Dec 16 15:25:56.354212 systemd[1794]: Queued start job for default target default.target. Dec 16 15:25:56.361379 systemd[1794]: Created slice app.slice - User Application Slice. Dec 16 15:25:56.361431 systemd[1794]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 15:25:56.361457 systemd[1794]: Reached target paths.target - Paths. Dec 16 15:25:56.362119 systemd[1794]: Reached target timers.target - Timers. Dec 16 15:25:56.366650 systemd[1794]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 15:25:56.369968 systemd[1794]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 15:25:56.388572 systemd[1794]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 15:25:56.388711 systemd[1794]: Reached target sockets.target - Sockets. Dec 16 15:25:56.396317 systemd[1794]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 15:25:56.396572 systemd[1794]: Reached target basic.target - Basic System. Dec 16 15:25:56.396663 systemd[1794]: Reached target default.target - Main User Target. Dec 16 15:25:56.396732 systemd[1794]: Startup finished in 215ms. Dec 16 15:25:56.397047 systemd[1]: Started user@500.service - User Manager for UID 500. 
Dec 16 15:25:56.407826 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 15:25:56.463766 systemd-timesyncd[1551]: Network configuration changed, trying to establish connection. Dec 16 15:25:56.464450 systemd-networkd[1577]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8669:24:19ff:fee6:19a6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8669:24:19ff:fee6:19a6/64 assigned by NDisc. Dec 16 15:25:56.464461 systemd-networkd[1577]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 16 15:25:56.706667 kubelet[1791]: E1216 15:25:56.706441 1791 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 15:25:56.709714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 15:25:56.710010 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 15:25:56.710898 systemd[1]: kubelet.service: Consumed 1.015s CPU time, 257.4M memory peak. Dec 16 15:25:56.899652 systemd[1]: Started sshd@1-10.230.25.166:22-139.178.89.65:33666.service - OpenSSH per-connection server daemon (139.178.89.65:33666). Dec 16 15:25:57.758559 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 15:25:57.760654 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 15:25:57.772687 sshd[1814]: Accepted publickey for core from 139.178.89.65 port 33666 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:25:57.777344 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:25:57.786943 systemd-logind[1641]: New session 2 of user core. Dec 16 15:25:57.796971 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 15:25:57.964965 systemd-timesyncd[1551]: Network configuration changed, trying to establish connection. Dec 16 15:25:58.259657 sshd[1819]: Connection closed by 139.178.89.65 port 33666 Dec 16 15:25:58.260484 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Dec 16 15:25:58.268123 systemd[1]: sshd@1-10.230.25.166:22-139.178.89.65:33666.service: Deactivated successfully. Dec 16 15:25:58.270721 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 15:25:58.272292 systemd-logind[1641]: Session 2 logged out. Waiting for processes to exit. Dec 16 15:25:58.274098 systemd-logind[1641]: Removed session 2. Dec 16 15:25:58.409114 systemd[1]: Started sshd@2-10.230.25.166:22-139.178.89.65:33674.service - OpenSSH per-connection server daemon (139.178.89.65:33674). Dec 16 15:25:59.196695 sshd[1825]: Accepted publickey for core from 139.178.89.65 port 33674 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:25:59.198566 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:25:59.208089 systemd-logind[1641]: New session 3 of user core. Dec 16 15:25:59.215065 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 15:25:59.641542 sshd[1828]: Connection closed by 139.178.89.65 port 33674 Dec 16 15:25:59.643256 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Dec 16 15:25:59.648852 systemd[1]: sshd@2-10.230.25.166:22-139.178.89.65:33674.service: Deactivated successfully. 
Dec 16 15:25:59.651787 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 15:25:59.653351 systemd-logind[1641]: Session 3 logged out. Waiting for processes to exit. Dec 16 15:25:59.655494 systemd-logind[1641]: Removed session 3. Dec 16 15:26:00.395681 login[1782]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 15:26:00.404073 systemd-logind[1641]: New session 4 of user core. Dec 16 15:26:00.413982 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 15:26:00.729230 login[1783]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 15:26:00.739879 systemd-logind[1641]: New session 5 of user core. Dec 16 15:26:00.745855 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 15:26:01.780542 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 15:26:01.780748 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 15:26:01.790468 coreos-metadata[1717]: Dec 16 15:26:01.790 WARN failed to locate config-drive, using the metadata service API instead Dec 16 15:26:01.794919 coreos-metadata[1628]: Dec 16 15:26:01.794 WARN failed to locate config-drive, using the metadata service API instead Dec 16 15:26:01.816622 coreos-metadata[1717]: Dec 16 15:26:01.816 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 16 15:26:01.817410 coreos-metadata[1628]: Dec 16 15:26:01.817 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 16 15:26:01.822496 coreos-metadata[1628]: Dec 16 15:26:01.822 INFO Fetch failed with 404: resource not found Dec 16 15:26:01.822496 coreos-metadata[1628]: Dec 16 15:26:01.822 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 15:26:01.823542 coreos-metadata[1628]: Dec 16 15:26:01.823 INFO Fetch successful Dec 16 15:26:01.823640 coreos-metadata[1628]: Dec 16 15:26:01.823 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 16 15:26:01.835778 coreos-metadata[1628]: Dec 16 15:26:01.835 INFO Fetch successful Dec 16 15:26:01.836143 coreos-metadata[1628]: Dec 16 15:26:01.836 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 16 15:26:01.840169 coreos-metadata[1717]: Dec 16 15:26:01.840 INFO Fetch successful Dec 16 15:26:01.840169 coreos-metadata[1717]: Dec 16 15:26:01.840 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 15:26:01.846942 coreos-metadata[1628]: Dec 16 15:26:01.846 INFO Fetch successful Dec 16 15:26:01.847130 coreos-metadata[1628]: Dec 16 15:26:01.846 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 16 15:26:01.861395 coreos-metadata[1628]: Dec 16 15:26:01.861 INFO Fetch successful Dec 16 15:26:01.861695 coreos-metadata[1628]: Dec 16 15:26:01.861 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 16 15:26:01.867997 coreos-metadata[1717]: Dec 16 15:26:01.867 INFO Fetch successful Dec 16 15:26:01.873893 unknown[1717]: wrote ssh authorized keys file for user: core Dec 16 15:26:01.881542 coreos-metadata[1628]: Dec 16 15:26:01.880 INFO Fetch successful Dec 16 15:26:01.899012 update-ssh-keys[1863]: Updated "/home/core/.ssh/authorized_keys" Dec 16 15:26:01.902121 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 15:26:01.904745 systemd[1]: Finished sshkeys.service. 
Dec 16 15:26:01.920825 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 15:26:01.921954 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 15:26:01.922182 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 15:26:01.924728 systemd[1]: Startup finished in 3.410s (kernel) + 13.771s (initrd) + 12.344s (userspace) = 29.527s. Dec 16 15:26:06.960786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 15:26:06.963548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 15:26:07.309433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 15:26:07.334404 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 15:26:07.440680 kubelet[1878]: E1216 15:26:07.440585 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 15:26:07.445246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 15:26:07.445541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 15:26:07.446643 systemd[1]: kubelet.service: Consumed 255ms CPU time, 110.5M memory peak. Dec 16 15:26:09.807643 systemd[1]: Started sshd@3-10.230.25.166:22-139.178.89.65:40408.service - OpenSSH per-connection server daemon (139.178.89.65:40408). Dec 16 15:26:10.599077 sshd[1887]: Accepted publickey for core from 139.178.89.65 port 40408 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:26:10.600885 sshd-session[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:26:10.608896 systemd-logind[1641]: New session 6 of user core. Dec 16 15:26:10.624910 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 15:26:11.039944 sshd[1890]: Connection closed by 139.178.89.65 port 40408 Dec 16 15:26:11.040838 sshd-session[1887]: pam_unix(sshd:session): session closed for user core Dec 16 15:26:11.047188 systemd[1]: sshd@3-10.230.25.166:22-139.178.89.65:40408.service: Deactivated successfully. Dec 16 15:26:11.049464 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 15:26:11.050797 systemd-logind[1641]: Session 6 logged out. Waiting for processes to exit. Dec 16 15:26:11.053091 systemd-logind[1641]: Removed session 6. Dec 16 15:26:11.225763 systemd[1]: Started sshd@4-10.230.25.166:22-139.178.89.65:47844.service - OpenSSH per-connection server daemon (139.178.89.65:47844). Dec 16 15:26:12.084706 sshd[1896]: Accepted publickey for core from 139.178.89.65 port 47844 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:26:12.086621 sshd-session[1896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:26:12.095600 systemd-logind[1641]: New session 7 of user core. Dec 16 15:26:12.104770 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 16 15:26:12.564654 sshd[1899]: Connection closed by 139.178.89.65 port 47844 Dec 16 15:26:12.564378 sshd-session[1896]: pam_unix(sshd:session): session closed for user core Dec 16 15:26:12.569423 systemd[1]: sshd@4-10.230.25.166:22-139.178.89.65:47844.service: Deactivated successfully. Dec 16 15:26:12.572296 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 15:26:12.574269 systemd-logind[1641]: Session 7 logged out. Waiting for processes to exit. Dec 16 15:26:12.577110 systemd-logind[1641]: Removed session 7. Dec 16 15:26:12.707592 systemd[1]: Started sshd@5-10.230.25.166:22-139.178.89.65:47848.service - OpenSSH per-connection server daemon (139.178.89.65:47848). Dec 16 15:26:13.477468 sshd[1905]: Accepted publickey for core from 139.178.89.65 port 47848 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:26:13.479220 sshd-session[1905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:26:13.486973 systemd-logind[1641]: New session 8 of user core. Dec 16 15:26:13.496261 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 15:26:13.915956 sshd[1908]: Connection closed by 139.178.89.65 port 47848 Dec 16 15:26:13.917110 sshd-session[1905]: pam_unix(sshd:session): session closed for user core Dec 16 15:26:13.924164 systemd[1]: sshd@5-10.230.25.166:22-139.178.89.65:47848.service: Deactivated successfully. Dec 16 15:26:13.927141 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 15:26:13.928734 systemd-logind[1641]: Session 8 logged out. Waiting for processes to exit. Dec 16 15:26:13.930924 systemd-logind[1641]: Removed session 8. Dec 16 15:26:14.081832 systemd[1]: Started sshd@6-10.230.25.166:22-139.178.89.65:47854.service - OpenSSH per-connection server daemon (139.178.89.65:47854). Dec 16 15:26:14.870479 sshd[1914]: Accepted publickey for core from 139.178.89.65 port 47854 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:26:14.872238 sshd-session[1914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:26:14.879844 systemd-logind[1641]: New session 9 of user core. Dec 16 15:26:14.890826 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 15:26:15.184627 sudo[1918]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 15:26:15.185107 sudo[1918]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 15:26:15.206807 sudo[1918]: pam_unix(sudo:session): session closed for user root Dec 16 15:26:15.352749 sshd[1917]: Connection closed by 139.178.89.65 port 47854 Dec 16 15:26:15.353909 sshd-session[1914]: pam_unix(sshd:session): session closed for user core Dec 16 15:26:15.360752 systemd[1]: sshd@6-10.230.25.166:22-139.178.89.65:47854.service: Deactivated successfully. Dec 16 15:26:15.364227 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 15:26:15.366921 systemd-logind[1641]: Session 9 logged out. Waiting for processes to exit. Dec 16 15:26:15.368976 systemd-logind[1641]: Removed session 9. Dec 16 15:26:15.561723 systemd[1]: Started sshd@7-10.230.25.166:22-139.178.89.65:47862.service - OpenSSH per-connection server daemon (139.178.89.65:47862). 
Dec 16 15:26:16.437899 sshd[1924]: Accepted publickey for core from 139.178.89.65 port 47862 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:26:16.439795 sshd-session[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:26:16.450596 systemd-logind[1641]: New session 10 of user core. Dec 16 15:26:16.456869 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 15:26:16.771194 sudo[1929]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 15:26:16.772217 sudo[1929]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 15:26:16.779187 sudo[1929]: pam_unix(sudo:session): session closed for user root Dec 16 15:26:16.788945 sudo[1928]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 15:26:16.789359 sudo[1928]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 15:26:16.805340 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 15:26:16.851000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 15:26:16.852902 kernel: kauditd_printk_skb: 69 callbacks suppressed Dec 16 15:26:16.852980 kernel: audit: type=1305 audit(1765898776.851:226): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 15:26:16.853031 augenrules[1951]: No rules Dec 16 15:26:16.855085 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 15:26:16.855488 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 15:26:16.851000 audit[1951]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffac1a36b0 a2=420 a3=0 items=0 ppid=1932 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:16.858057 kernel: audit: type=1300 audit(1765898776.851:226): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffac1a36b0 a2=420 a3=0 items=0 ppid=1932 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:16.861762 sudo[1928]: pam_unix(sudo:session): session closed for user root Dec 16 15:26:16.851000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 15:26:16.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:16.871948 kernel: audit: type=1327 audit(1765898776.851:226): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 15:26:16.872089 kernel: audit: type=1130 audit(1765898776.856:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:26:16.872141 kernel: audit: type=1131 audit(1765898776.856:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:16.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:16.861000 audit[1928]: USER_END pid=1928 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 15:26:16.877404 kernel: audit: type=1106 audit(1765898776.861:229): pid=1928 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 15:26:16.861000 audit[1928]: CRED_DISP pid=1928 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 15:26:16.881663 kernel: audit: type=1104 audit(1765898776.861:230): pid=1928 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 15:26:17.026678 sshd[1927]: Connection closed by 139.178.89.65 port 47862 Dec 16 15:26:17.028464 sshd-session[1924]: pam_unix(sshd:session): session closed for user core Dec 16 15:26:17.030000 audit[1924]: USER_END pid=1924 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:26:17.037548 kernel: audit: type=1106 audit(1765898777.030:231): pid=1924 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:26:17.031000 audit[1924]: CRED_DISP pid=1924 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:26:17.037961 systemd[1]: sshd@7-10.230.25.166:22-139.178.89.65:47862.service: Deactivated successfully. Dec 16 15:26:17.042571 kernel: audit: type=1104 audit(1765898777.031:232): pid=1924 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:26:17.040981 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 15:26:17.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.25.166:22-139.178.89.65:47862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 15:26:17.044411 systemd-logind[1641]: Session 10 logged out. Waiting for processes to exit. Dec 16 15:26:17.046883 systemd-logind[1641]: Removed session 10. Dec 16 15:26:17.047556 kernel: audit: type=1131 audit(1765898777.037:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.25.166:22-139.178.89.65:47862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:17.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.25.166:22-139.178.89.65:47864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:17.171348 systemd[1]: Started sshd@8-10.230.25.166:22-139.178.89.65:47864.service - OpenSSH per-connection server daemon (139.178.89.65:47864). Dec 16 15:26:17.567176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 15:26:17.571417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 15:26:17.766365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 15:26:17.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:17.796236 (kubelet)[1971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 15:26:17.866632 kubelet[1971]: E1216 15:26:17.866396 1971 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 15:26:17.869675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 15:26:17.870077 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 15:26:17.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 15:26:17.871143 systemd[1]: kubelet.service: Consumed 220ms CPU time, 108.3M memory peak. 
Dec 16 15:26:17.966000 audit[1960]: USER_ACCT pid=1960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:26:17.967166 sshd[1960]: Accepted publickey for core from 139.178.89.65 port 47864 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:26:17.967000 audit[1960]: CRED_ACQ pid=1960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:26:17.968000 audit[1960]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3ebd78b0 a2=3 a3=0 items=0 ppid=1 pid=1960 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:17.968000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:26:17.969135 sshd-session[1960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:26:17.976853 systemd-logind[1641]: New session 11 of user core. Dec 16 15:26:17.986887 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 15:26:17.991000 audit[1960]: USER_START pid=1960 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:26:17.994000 audit[1978]: CRED_ACQ pid=1978 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:26:18.266000 audit[1979]: USER_ACCT pid=1979 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 15:26:18.267702 sudo[1979]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 15:26:18.268142 sudo[1979]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 15:26:18.267000 audit[1979]: CRED_REFR pid=1979 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 15:26:18.271000 audit[1979]: USER_START pid=1979 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 15:26:18.790997 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 15:26:18.813053 (dockerd)[1997]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 15:26:19.200900 dockerd[1997]: time="2025-12-16T15:26:19.200712313Z" level=info msg="Starting up" Dec 16 15:26:19.204196 dockerd[1997]: time="2025-12-16T15:26:19.204161509Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 15:26:19.224079 dockerd[1997]: time="2025-12-16T15:26:19.223946325Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 15:26:19.245009 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3387459520-merged.mount: Deactivated successfully. Dec 16 15:26:19.265842 systemd[1]: var-lib-docker-metacopy\x2dcheck3491775734-merged.mount: Deactivated successfully. Dec 16 15:26:19.289560 dockerd[1997]: time="2025-12-16T15:26:19.288713299Z" level=info msg="Loading containers: start." Dec 16 15:26:19.315586 kernel: Initializing XFRM netlink socket Dec 16 15:26:19.406000 audit[2047]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.406000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcb00141e0 a2=0 a3=0 items=0 ppid=1997 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.406000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 15:26:19.412000 audit[2049]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.412000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe5f9f8cd0 a2=0 a3=0 items=0 ppid=1997 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 15:26:19.416000 audit[2051]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.416000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4500b5f0 a2=0 a3=0 items=0 ppid=1997 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.416000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 15:26:19.419000 audit[2053]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.419000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4ddac350 a2=0 a3=0 items=0 ppid=1997 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
15:26:19.419000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 15:26:19.422000 audit[2055]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.422000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffdb0bda90 a2=0 a3=0 items=0 ppid=1997 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.422000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 15:26:19.426000 audit[2057]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.426000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffcd16b240 a2=0 a3=0 items=0 ppid=1997 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.426000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 15:26:19.429000 audit[2059]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.429000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff219d3680 a2=0 a3=0 items=0 ppid=1997 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.429000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 15:26:19.432000 audit[2061]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.432000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffeecbf17c0 a2=0 a3=0 items=0 ppid=1997 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.432000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 15:26:19.493000 audit[2064]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.493000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff5f268370 a2=0 a3=0 items=0 ppid=1997 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.493000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 15:26:19.496000 audit[2066]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.496000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffc3477370 a2=0 a3=0 items=0 ppid=1997 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.496000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 15:26:19.500000 audit[2068]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.500000 audit[2068]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff677f2f30 a2=0 a3=0 items=0 ppid=1997 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.500000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 15:26:19.503000 audit[2070]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.503000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffa4e66060 a2=0 a3=0 items=0 ppid=1997 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.503000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 15:26:19.506000 audit[2072]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.506000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff41cd44a0 a2=0 a3=0 items=0 ppid=1997 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.506000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 15:26:19.565000 audit[2102]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.565000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff686241e0 a2=0 a3=0 items=0 ppid=1997 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.565000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 15:26:19.568000 audit[2104]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.568000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc429934e0 a2=0 a3=0 items=0 ppid=1997 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.568000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 15:26:19.572000 audit[2106]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.572000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb32e2880 a2=0 a3=0 items=0 ppid=1997 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.572000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 15:26:19.575000 audit[2108]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.575000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9b927580 a2=0 a3=0 items=0 ppid=1997 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.575000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 15:26:19.578000 audit[2110]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.578000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffe74b3730 a2=0 a3=0 items=0 ppid=1997 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.578000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 15:26:19.581000 audit[2112]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.581000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff4cd36d00 a2=0 a3=0 items=0 ppid=1997 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.581000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 15:26:19.584000 audit[2114]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2114 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.584000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd2b8f38f0 a2=0 a3=0 items=0 ppid=1997 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.584000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 15:26:19.588000 audit[2116]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.588000 audit[2116]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffec6181e0 a2=0 a3=0 items=0 ppid=1997 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.588000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 15:26:19.591000 audit[2118]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.591000 audit[2118]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd7ee74290 a2=0 a3=0 items=0 ppid=1997 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.591000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 15:26:19.595000 audit[2120]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.595000 audit[2120]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd95f14820 a2=0 a3=0 items=0 ppid=1997 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.595000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 15:26:19.598000 audit[2122]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.598000 audit[2122]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff821340a0 a2=0 a3=0 items=0 ppid=1997 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.598000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 15:26:19.601000 audit[2124]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2124 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 15:26:19.601000 audit[2124]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffce5743420 a2=0 a3=0 items=0 ppid=1997 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.601000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 15:26:19.605000 audit[2126]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.605000 audit[2126]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffebe3c81a0 a2=0 a3=0 items=0 ppid=1997 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.605000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 15:26:19.614000 audit[2131]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.614000 audit[2131]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd6da6d110 a2=0 a3=0 items=0 ppid=1997 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.614000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 15:26:19.617000 audit[2133]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.617000 audit[2133]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcd5baa760 a2=0 a3=0 items=0 ppid=1997 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.617000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 15:26:19.621000 audit[2135]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.621000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffead741170 a2=0 a3=0 items=0 ppid=1997 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.621000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 15:26:19.624000 audit[2137]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.624000 audit[2137]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb734e550 a2=0 a3=0 items=0 ppid=1997 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.624000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 15:26:19.627000 audit[2139]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2139 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.627000 audit[2139]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffa77daad0 a2=0 a3=0 items=0 ppid=1997 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.627000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 15:26:19.631000 audit[2141]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:19.631000 audit[2141]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc1e5fb910 a2=0 a3=0 items=0 ppid=1997 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.631000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 15:26:19.646469 systemd-timesyncd[1551]: Network configuration changed, trying to establish connection. Dec 16 15:26:19.661000 audit[2146]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.661000 audit[2146]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffcdd1cafe0 a2=0 a3=0 items=0 ppid=1997 pid=2146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.661000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 15:26:19.667000 audit[2148]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.667000 audit[2148]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcf6b9bbd0 a2=0 a3=0 items=0 ppid=1997 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.667000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 15:26:19.681000 audit[2156]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.681000 audit[2156]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffca3e9c840 a2=0 a3=0 items=0 ppid=1997 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 15:26:19.705000 audit[2162]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.705000 audit[2162]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff64fc7290 a2=0 a3=0 items=0 ppid=1997 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.705000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 15:26:19.709000 audit[2164]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.709000 audit[2164]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd9eb5ac30 a2=0 a3=0 items=0 ppid=1997 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 15:26:19.713000 audit[2166]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.713000 audit[2166]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff20a1cc90 a2=0 a3=0 items=0 ppid=1997 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.713000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 15:26:19.716000 audit[2168]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.716000 audit[2168]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc481462e0 a2=0 a3=0 items=0 ppid=1997 pid=2168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.716000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 15:26:19.720000 audit[2170]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:19.720000 audit[2170]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 
a1=7ffd246db220 a2=0 a3=0 items=0 ppid=1997 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:19.720000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 15:26:19.722670 systemd-networkd[1577]: docker0: Link UP Dec 16 15:26:19.727755 dockerd[1997]: time="2025-12-16T15:26:19.727646747Z" level=info msg="Loading containers: done." Dec 16 15:26:19.758426 dockerd[1997]: time="2025-12-16T15:26:19.758253689Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 15:26:19.758426 dockerd[1997]: time="2025-12-16T15:26:19.758399236Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 15:26:19.758714 dockerd[1997]: time="2025-12-16T15:26:19.758560894Z" level=info msg="Initializing buildkit" Dec 16 15:26:19.804121 dockerd[1997]: time="2025-12-16T15:26:19.804030332Z" level=info msg="Completed buildkit initialization" Dec 16 15:26:19.814006 dockerd[1997]: time="2025-12-16T15:26:19.813745644Z" level=info msg="Daemon has completed initialization" Dec 16 15:26:19.814544 dockerd[1997]: time="2025-12-16T15:26:19.814252181Z" level=info msg="API listen on /run/docker.sock" Dec 16 15:26:19.814329 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 15:26:19.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:19.881596 systemd-timesyncd[1551]: Contacted time server [2a00:fd80:aaaa:ffff::eeee:ff1]:123 (2.flatcar.pool.ntp.org). Dec 16 15:26:19.881984 systemd-timesyncd[1551]: Initial clock synchronization to Tue 2025-12-16 15:26:19.983345 UTC. Dec 16 15:26:20.241883 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3597243217-merged.mount: Deactivated successfully. Dec 16 15:26:20.818488 containerd[1667]: time="2025-12-16T15:26:20.818363591Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 15:26:21.825242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3721585698.mount: Deactivated successfully. 
Dec 16 15:26:23.892671 containerd[1667]: time="2025-12-16T15:26:23.892578157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:23.893799 containerd[1667]: time="2025-12-16T15:26:23.893733968Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25399329" Dec 16 15:26:23.895041 containerd[1667]: time="2025-12-16T15:26:23.895004305Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:23.898824 containerd[1667]: time="2025-12-16T15:26:23.898790037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:23.900410 containerd[1667]: time="2025-12-16T15:26:23.900369548Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 3.081866442s" Dec 16 15:26:23.900599 containerd[1667]: time="2025-12-16T15:26:23.900557893Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 16 15:26:23.902278 containerd[1667]: time="2025-12-16T15:26:23.902230338Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 15:26:26.224906 containerd[1667]: time="2025-12-16T15:26:26.223442598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:26.227330 containerd[1667]: time="2025-12-16T15:26:26.227273514Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Dec 16 15:26:26.229072 containerd[1667]: time="2025-12-16T15:26:26.229032351Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:26.236393 containerd[1667]: time="2025-12-16T15:26:26.236315491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:26.237801 containerd[1667]: time="2025-12-16T15:26:26.237764822Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 2.335465894s" Dec 16 15:26:26.237951 containerd[1667]: time="2025-12-16T15:26:26.237923928Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 16 15:26:26.239193 
containerd[1667]: time="2025-12-16T15:26:26.239144529Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 15:26:26.502464 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 15:26:26.514330 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 16 15:26:26.514425 kernel: audit: type=1131 audit(1765898786.501:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:26.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:26.527000 audit: BPF prog-id=61 op=UNLOAD Dec 16 15:26:26.530565 kernel: audit: type=1334 audit(1765898786.527:287): prog-id=61 op=UNLOAD Dec 16 15:26:28.067993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 15:26:28.072944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 15:26:28.324892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 15:26:28.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:28.331573 kernel: audit: type=1130 audit(1765898788.323:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:28.342576 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 15:26:28.441549 containerd[1667]: time="2025-12-16T15:26:28.440640872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:28.445879 containerd[1667]: time="2025-12-16T15:26:28.445755701Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792" Dec 16 15:26:28.446903 containerd[1667]: time="2025-12-16T15:26:28.446798804Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:28.450840 kubelet[2288]: E1216 15:26:28.450743 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 15:26:28.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 15:26:28.454900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 15:26:28.455203 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 15:26:28.455942 systemd[1]: kubelet.service: Consumed 273ms CPU time, 108.5M memory peak. 
Dec 16 15:26:28.461616 kernel: audit: type=1131 audit(1765898788.455:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 15:26:28.463754 containerd[1667]: time="2025-12-16T15:26:28.463671139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:28.466759 containerd[1667]: time="2025-12-16T15:26:28.466450657Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 2.227060395s" Dec 16 15:26:28.466759 containerd[1667]: time="2025-12-16T15:26:28.466511784Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 16 15:26:28.468452 containerd[1667]: time="2025-12-16T15:26:28.468147064Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 15:26:30.314533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3101370283.mount: Deactivated successfully. Dec 16 15:26:31.058255 containerd[1667]: time="2025-12-16T15:26:31.058176453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:31.060649 containerd[1667]: time="2025-12-16T15:26:31.060604863Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25961571" Dec 16 15:26:31.061678 containerd[1667]: time="2025-12-16T15:26:31.061606190Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:31.064047 containerd[1667]: time="2025-12-16T15:26:31.063988541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:31.065214 containerd[1667]: time="2025-12-16T15:26:31.064937622Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 2.596750264s" Dec 16 15:26:31.065214 containerd[1667]: time="2025-12-16T15:26:31.064991246Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 16 15:26:31.065765 containerd[1667]: time="2025-12-16T15:26:31.065732745Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 15:26:31.801085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2271048908.mount: Deactivated successfully. 
Dec 16 15:26:33.430602 containerd[1667]: time="2025-12-16T15:26:33.430506241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:33.432415 containerd[1667]: time="2025-12-16T15:26:33.432092610Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21691199" Dec 16 15:26:33.433243 containerd[1667]: time="2025-12-16T15:26:33.433199294Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:33.436883 containerd[1667]: time="2025-12-16T15:26:33.436840916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:33.438400 containerd[1667]: time="2025-12-16T15:26:33.438363582Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.372586087s" Dec 16 15:26:33.438558 containerd[1667]: time="2025-12-16T15:26:33.438503842Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 16 15:26:33.439547 containerd[1667]: time="2025-12-16T15:26:33.439421419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 15:26:34.215931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341989994.mount: Deactivated successfully. 
Dec 16 15:26:34.227545 containerd[1667]: time="2025-12-16T15:26:34.226483218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:34.230010 containerd[1667]: time="2025-12-16T15:26:34.229966283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 16 15:26:34.230944 containerd[1667]: time="2025-12-16T15:26:34.230910250Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:34.234986 containerd[1667]: time="2025-12-16T15:26:34.234950191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:34.237495 containerd[1667]: time="2025-12-16T15:26:34.237444710Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 797.89598ms" Dec 16 15:26:34.237679 containerd[1667]: time="2025-12-16T15:26:34.237651200Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 16 15:26:34.238929 containerd[1667]: time="2025-12-16T15:26:34.238593151Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 15:26:34.989345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1202067881.mount: Deactivated successfully. Dec 16 15:26:38.567601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 15:26:38.573789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 15:26:38.864848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 15:26:38.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:38.870618 kernel: audit: type=1130 audit(1765898798.864:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:38.880281 (kubelet)[2416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 15:26:38.963338 kubelet[2416]: E1216 15:26:38.963258 2416 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 15:26:38.966366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 15:26:38.966999 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 15:26:38.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 15:26:38.968392 systemd[1]: kubelet.service: Consumed 236ms CPU time, 108.5M memory peak. Dec 16 15:26:38.972552 kernel: audit: type=1131 audit(1765898798.967:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 15:26:39.412198 update_engine[1642]: I20251216 15:26:39.410733 1642 update_attempter.cc:509] Updating boot flags... Dec 16 15:26:44.111086 containerd[1667]: time="2025-12-16T15:26:44.109645948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:44.111807 containerd[1667]: time="2025-12-16T15:26:44.111703918Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72348001" Dec 16 15:26:44.113676 containerd[1667]: time="2025-12-16T15:26:44.113622673Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:44.122139 containerd[1667]: time="2025-12-16T15:26:44.122020798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:26:44.123528 containerd[1667]: time="2025-12-16T15:26:44.123455571Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 9.884825255s" Dec 16 15:26:44.123661 containerd[1667]: time="2025-12-16T15:26:44.123633488Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 16 15:26:49.067134 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 16 15:26:49.072615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 15:26:49.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:49.400775 kernel: audit: type=1130 audit(1765898809.373:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:49.373813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 15:26:49.415731 (kubelet)[2473]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 15:26:49.492534 kubelet[2473]: E1216 15:26:49.491143 2473 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 15:26:49.494990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 15:26:49.495266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 15:26:49.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 15:26:49.499943 systemd[1]: kubelet.service: Consumed 224ms CPU time, 108.2M memory peak. Dec 16 15:26:49.500769 kernel: audit: type=1131 audit(1765898809.495:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 15:26:50.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:50.010202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 15:26:50.010500 systemd[1]: kubelet.service: Consumed 224ms CPU time, 108.2M memory peak. Dec 16 15:26:50.016603 kernel: audit: type=1130 audit(1765898810.008:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:50.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:50.022693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 15:26:50.024549 kernel: audit: type=1131 audit(1765898810.008:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:50.072397 systemd[1]: Reload requested from client PID 2487 ('systemctl') (unit session-11.scope)... Dec 16 15:26:50.072931 systemd[1]: Reloading... Dec 16 15:26:50.247576 zram_generator::config[2535]: No configuration found. Dec 16 15:26:50.601188 systemd[1]: Reloading finished in 527 ms. 
Dec 16 15:26:50.653544 kernel: audit: type=1334 audit(1765898810.646:296): prog-id=65 op=LOAD Dec 16 15:26:50.646000 audit: BPF prog-id=65 op=LOAD Dec 16 15:26:50.658536 kernel: audit: type=1334 audit(1765898810.646:297): prog-id=57 op=UNLOAD Dec 16 15:26:50.646000 audit: BPF prog-id=57 op=UNLOAD Dec 16 15:26:50.651000 audit: BPF prog-id=66 op=LOAD Dec 16 15:26:50.660550 kernel: audit: type=1334 audit(1765898810.651:298): prog-id=66 op=LOAD Dec 16 15:26:50.651000 audit: BPF prog-id=50 op=UNLOAD Dec 16 15:26:50.651000 audit: BPF prog-id=67 op=LOAD Dec 16 15:26:50.663982 kernel: audit: type=1334 audit(1765898810.651:299): prog-id=50 op=UNLOAD Dec 16 15:26:50.664066 kernel: audit: type=1334 audit(1765898810.651:300): prog-id=67 op=LOAD Dec 16 15:26:50.664115 kernel: audit: type=1334 audit(1765898810.651:301): prog-id=68 op=LOAD Dec 16 15:26:50.651000 audit: BPF prog-id=68 op=LOAD Dec 16 15:26:50.651000 audit: BPF prog-id=51 op=UNLOAD Dec 16 15:26:50.651000 audit: BPF prog-id=52 op=UNLOAD Dec 16 15:26:50.656000 audit: BPF prog-id=69 op=LOAD Dec 16 15:26:50.656000 audit: BPF prog-id=58 op=UNLOAD Dec 16 15:26:50.656000 audit: BPF prog-id=70 op=LOAD Dec 16 15:26:50.656000 audit: BPF prog-id=71 op=LOAD Dec 16 15:26:50.656000 audit: BPF prog-id=59 op=UNLOAD Dec 16 15:26:50.656000 audit: BPF prog-id=60 op=UNLOAD Dec 16 15:26:50.659000 audit: BPF prog-id=72 op=LOAD Dec 16 15:26:50.664000 audit: BPF prog-id=41 op=UNLOAD Dec 16 15:26:50.664000 audit: BPF prog-id=73 op=LOAD Dec 16 15:26:50.664000 audit: BPF prog-id=74 op=LOAD Dec 16 15:26:50.664000 audit: BPF prog-id=42 op=UNLOAD Dec 16 15:26:50.664000 audit: BPF prog-id=43 op=UNLOAD Dec 16 15:26:50.665000 audit: BPF prog-id=75 op=LOAD Dec 16 15:26:50.665000 audit: BPF prog-id=44 op=UNLOAD Dec 16 15:26:50.665000 audit: BPF prog-id=76 op=LOAD Dec 16 15:26:50.665000 audit: BPF prog-id=77 op=LOAD Dec 16 15:26:50.665000 audit: BPF prog-id=45 op=UNLOAD Dec 16 15:26:50.665000 audit: BPF prog-id=46 op=UNLOAD Dec 16 15:26:50.667000 audit: BPF prog-id=78 op=LOAD Dec 16 15:26:50.667000 audit: BPF prog-id=53 op=UNLOAD Dec 16 15:26:50.668000 audit: BPF prog-id=79 op=LOAD Dec 16 15:26:50.668000 audit: BPF prog-id=47 op=UNLOAD Dec 16 15:26:50.668000 audit: BPF prog-id=80 op=LOAD Dec 16 15:26:50.668000 audit: BPF prog-id=81 op=LOAD Dec 16 15:26:50.668000 audit: BPF prog-id=48 op=UNLOAD Dec 16 15:26:50.668000 audit: BPF prog-id=49 op=UNLOAD Dec 16 15:26:50.669000 audit: BPF prog-id=82 op=LOAD Dec 16 15:26:50.669000 audit: BPF prog-id=56 op=UNLOAD Dec 16 15:26:50.670000 audit: BPF prog-id=83 op=LOAD Dec 16 15:26:50.670000 audit: BPF prog-id=84 op=LOAD Dec 16 15:26:50.670000 audit: BPF prog-id=54 op=UNLOAD Dec 16 15:26:50.670000 audit: BPF prog-id=55 op=UNLOAD Dec 16 15:26:50.673000 audit: BPF prog-id=85 op=LOAD Dec 16 15:26:50.673000 audit: BPF prog-id=64 op=UNLOAD Dec 16 15:26:50.696109 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 15:26:50.696251 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 15:26:50.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 15:26:50.696736 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 15:26:50.696855 systemd[1]: kubelet.service: Consumed 148ms CPU time, 98.2M memory peak. Dec 16 15:26:50.699298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 15:26:50.907832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 15:26:50.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:50.920032 (kubelet)[2602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 15:26:51.061819 kubelet[2602]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 15:26:51.061819 kubelet[2602]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 15:26:51.069125 kubelet[2602]: I1216 15:26:51.068659 2602 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 15:26:51.433056 kubelet[2602]: I1216 15:26:51.432833 2602 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 15:26:51.433056 kubelet[2602]: I1216 15:26:51.432891 2602 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 15:26:51.436541 kubelet[2602]: I1216 15:26:51.436038 2602 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 15:26:51.436541 kubelet[2602]: I1216 15:26:51.436097 2602 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 15:26:51.436541 kubelet[2602]: I1216 15:26:51.436465 2602 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 15:26:51.462545 kubelet[2602]: I1216 15:26:51.462471 2602 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 15:26:51.466870 kubelet[2602]: E1216 15:26:51.466752 2602 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.25.166:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.25.166:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 15:26:51.484184 kubelet[2602]: I1216 15:26:51.484123 2602 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 15:26:51.502138 kubelet[2602]: I1216 15:26:51.502004 2602 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 15:26:51.504247 kubelet[2602]: I1216 15:26:51.504111 2602 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 15:26:51.505845 kubelet[2602]: I1216 15:26:51.504189 2602 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-g2i2t.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 15:26:51.506151 kubelet[2602]: I1216 15:26:51.505873 2602 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 15:26:51.506151 kubelet[2602]: I1216 15:26:51.505896 2602 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 15:26:51.506151 kubelet[2602]: I1216 15:26:51.506090 2602 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 15:26:51.514155 kubelet[2602]: I1216 15:26:51.514101 2602 state_mem.go:36] "Initialized new in-memory state store" Dec 16 15:26:51.514795 kubelet[2602]: I1216 15:26:51.514771 2602 kubelet.go:475] "Attempting to sync node with API server" Dec 16 15:26:51.515137 kubelet[2602]: I1216 15:26:51.515095 2602 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 15:26:51.518000 kubelet[2602]: E1216 15:26:51.517950 2602 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.25.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-g2i2t.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.25.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 15:26:51.518953 kubelet[2602]: I1216 15:26:51.518847 2602 kubelet.go:387] "Adding apiserver pod source" Dec 16 15:26:51.518953 kubelet[2602]: I1216 15:26:51.518928 2602 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 15:26:51.525323 kubelet[2602]: E1216 15:26:51.525271 2602 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.230.25.166:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.25.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 15:26:51.526160 kubelet[2602]: I1216 15:26:51.525975 2602 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 15:26:51.530539 kubelet[2602]: I1216 15:26:51.530475 2602 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 15:26:51.530988 kubelet[2602]: I1216 15:26:51.530966 2602 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 15:26:51.535910 kubelet[2602]: W1216 15:26:51.535877 2602 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 15:26:51.542809 kubelet[2602]: I1216 15:26:51.542212 2602 server.go:1262] "Started kubelet" Dec 16 15:26:51.545703 kubelet[2602]: I1216 15:26:51.545678 2602 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 15:26:51.551608 kubelet[2602]: E1216 15:26:51.549127 2602 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.25.166:6443/api/v1/namespaces/default/events\": dial tcp 10.230.25.166:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-g2i2t.gb1.brightbox.com.1881bb9ceb410ed2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-g2i2t.gb1.brightbox.com,UID:srv-g2i2t.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-g2i2t.gb1.brightbox.com,},FirstTimestamp:2025-12-16 15:26:51.54214677 +0000 UTC m=+0.558426010,LastTimestamp:2025-12-16 15:26:51.54214677 +0000 UTC m=+0.558426010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-g2i2t.gb1.brightbox.com,}" Dec 16 15:26:51.551904 kubelet[2602]: I1216 15:26:51.551650 2602 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 15:26:51.557000 audit[2616]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:51.557000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc66af3e00 a2=0 a3=0 items=0 ppid=2602 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:51.557000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 15:26:51.562490 kubelet[2602]: I1216 15:26:51.562459 2602 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 15:26:51.563152 kubelet[2602]: E1216 15:26:51.563121 2602 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" Dec 16 15:26:51.566817 kubelet[2602]: I1216 15:26:51.566783 2602 server.go:310] "Adding debug handlers to kubelet server" Dec 16 15:26:51.567181 kubelet[2602]: I1216 
15:26:51.567156 2602 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 15:26:51.567366 kubelet[2602]: I1216 15:26:51.567345 2602 reconciler.go:29] "Reconciler: start to sync state" Dec 16 15:26:51.568000 audit[2618]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2618 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:51.568000 audit[2618]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa9d2f6e0 a2=0 a3=0 items=0 ppid=2602 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:51.568000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 15:26:51.576038 kubelet[2602]: I1216 15:26:51.575243 2602 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 15:26:51.576038 kubelet[2602]: I1216 15:26:51.575345 2602 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 15:26:51.576038 kubelet[2602]: I1216 15:26:51.575822 2602 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 15:26:51.578000 audit[2621]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2621 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:51.578000 audit[2621]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff6b64db30 a2=0 a3=0 items=0 ppid=2602 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:51.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 15:26:51.582830 kubelet[2602]: I1216 15:26:51.581647 2602 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 15:26:51.584000 audit[2623]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:51.584000 audit[2623]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdf44448b0 a2=0 a3=0 items=0 ppid=2602 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:51.584000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 15:26:51.586883 kubelet[2602]: E1216 15:26:51.586343 2602 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.25.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.25.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 15:26:51.586883 kubelet[2602]: E1216 15:26:51.586465 2602 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.25.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g2i2t.gb1.brightbox.com?timeout=10s\": dial tcp 
10.230.25.166:6443: connect: connection refused" interval="200ms" Dec 16 15:26:51.589479 kubelet[2602]: I1216 15:26:51.589426 2602 factory.go:223] Registration of the systemd container factory successfully Dec 16 15:26:51.590705 kubelet[2602]: I1216 15:26:51.590647 2602 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 15:26:51.592569 kubelet[2602]: I1216 15:26:51.592543 2602 factory.go:223] Registration of the containerd container factory successfully Dec 16 15:26:51.603000 audit[2626]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:51.603000 audit[2626]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff265b78c0 a2=0 a3=0 items=0 ppid=2602 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:51.603000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 16 15:26:51.605401 kubelet[2602]: I1216 15:26:51.605355 2602 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 15:26:51.606000 audit[2627]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:51.606000 audit[2627]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd755b7f30 a2=0 a3=0 items=0 ppid=2602 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:51.606000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 15:26:51.609137 kubelet[2602]: I1216 15:26:51.608339 2602 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 15:26:51.609137 kubelet[2602]: I1216 15:26:51.608372 2602 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 15:26:51.609137 kubelet[2602]: I1216 15:26:51.608432 2602 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 15:26:51.609137 kubelet[2602]: E1216 15:26:51.608506 2602 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 15:26:51.610000 audit[2629]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:51.610000 audit[2629]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdb6a9bf0 a2=0 a3=0 items=0 ppid=2602 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:51.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 15:26:51.611000 audit[2630]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:51.611000 audit[2630]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8924c960 a2=0 a3=0 items=0 ppid=2602 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:51.611000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 15:26:51.613000 audit[2631]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:26:51.613000 audit[2631]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe20fbdaf0 a2=0 a3=0 items=0 ppid=2602 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:51.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 15:26:51.615000 audit[2632]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:51.615000 audit[2632]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd461f0360 a2=0 a3=0 items=0 ppid=2602 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:51.615000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 15:26:51.617000 audit[2633]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2633 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:51.617000 audit[2633]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea8d66780 a2=0 a3=0 items=0 ppid=2602 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:51.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 15:26:51.620000 audit[2634]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2634 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:26:51.620000 audit[2634]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffddd61e290 a2=0 a3=0 items=0 ppid=2602 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:51.620000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 15:26:51.629060 kubelet[2602]: E1216 15:26:51.628839 2602 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.25.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.25.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 15:26:51.651607 kubelet[2602]: I1216 15:26:51.651571 2602 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 15:26:51.652443 kubelet[2602]: I1216 15:26:51.651764 2602 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 15:26:51.652443 kubelet[2602]: I1216 15:26:51.651893 2602 state_mem.go:36] "Initialized new in-memory state store" Dec 16 15:26:51.656342 kubelet[2602]: I1216 15:26:51.656316 2602 policy_none.go:49] "None policy: Start" Dec 16 15:26:51.656469 kubelet[2602]: I1216 15:26:51.656448 2602 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 15:26:51.656626 kubelet[2602]: I1216 15:26:51.656605 2602 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 15:26:51.659480 kubelet[2602]: I1216 15:26:51.658804 2602 policy_none.go:47] "Start" Dec 16 15:26:51.663725 kubelet[2602]: E1216 15:26:51.663694 2602 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" Dec 16 15:26:51.668825 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 15:26:51.688160 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 15:26:51.697954 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 15:26:51.708843 kubelet[2602]: E1216 15:26:51.708808 2602 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 15:26:51.709155 kubelet[2602]: E1216 15:26:51.708848 2602 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 15:26:51.709613 kubelet[2602]: I1216 15:26:51.709472 2602 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 15:26:51.709999 kubelet[2602]: I1216 15:26:51.709937 2602 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 15:26:51.710892 kubelet[2602]: I1216 15:26:51.710417 2602 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 15:26:51.713918 kubelet[2602]: E1216 15:26:51.713892 2602 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 15:26:51.714975 kubelet[2602]: E1216 15:26:51.714198 2602 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-g2i2t.gb1.brightbox.com\" not found" Dec 16 15:26:51.788304 kubelet[2602]: E1216 15:26:51.788215 2602 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.25.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g2i2t.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.25.166:6443: connect: connection refused" interval="400ms" Dec 16 15:26:51.813541 kubelet[2602]: I1216 15:26:51.813279 2602 kubelet_node_status.go:75] "Attempting to register node" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:51.814085 kubelet[2602]: E1216 15:26:51.814049 2602 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.25.166:6443/api/v1/nodes\": dial tcp 10.230.25.166:6443: connect: connection refused" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:51.930066 systemd[1]: Created slice kubepods-burstable-pod2b48045d36a0aab2c6190a017d8aee74.slice - libcontainer container kubepods-burstable-pod2b48045d36a0aab2c6190a017d8aee74.slice. Dec 16 15:26:51.961954 kubelet[2602]: E1216 15:26:51.961573 2602 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:51.967877 systemd[1]: Created slice kubepods-burstable-pode69ce0004b5ae706396b4ed4823bccb1.slice - libcontainer container kubepods-burstable-pode69ce0004b5ae706396b4ed4823bccb1.slice. Dec 16 15:26:51.968924 kubelet[2602]: I1216 15:26:51.968887 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e10106eb42f2b5319f7b2ab23688d25a-kubeconfig\") pod \"kube-scheduler-srv-g2i2t.gb1.brightbox.com\" (UID: \"e10106eb42f2b5319f7b2ab23688d25a\") " pod="kube-system/kube-scheduler-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:51.985023 kubelet[2602]: E1216 15:26:51.984979 2602 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:51.989880 systemd[1]: Created slice kubepods-burstable-pode10106eb42f2b5319f7b2ab23688d25a.slice - libcontainer container kubepods-burstable-pode10106eb42f2b5319f7b2ab23688d25a.slice. 
Dec 16 15:26:51.994426 kubelet[2602]: E1216 15:26:51.994395 2602 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.017174 kubelet[2602]: I1216 15:26:52.017133 2602 kubelet_node_status.go:75] "Attempting to register node" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.017785 kubelet[2602]: E1216 15:26:52.017696 2602 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.25.166:6443/api/v1/nodes\": dial tcp 10.230.25.166:6443: connect: connection refused" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.069322 kubelet[2602]: I1216 15:26:52.069247 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b48045d36a0aab2c6190a017d8aee74-k8s-certs\") pod \"kube-apiserver-srv-g2i2t.gb1.brightbox.com\" (UID: \"2b48045d36a0aab2c6190a017d8aee74\") " pod="kube-system/kube-apiserver-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.069322 kubelet[2602]: I1216 15:26:52.069305 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b48045d36a0aab2c6190a017d8aee74-usr-share-ca-certificates\") pod \"kube-apiserver-srv-g2i2t.gb1.brightbox.com\" (UID: \"2b48045d36a0aab2c6190a017d8aee74\") " pod="kube-system/kube-apiserver-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.069980 kubelet[2602]: I1216 15:26:52.069341 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e69ce0004b5ae706396b4ed4823bccb1-k8s-certs\") pod \"kube-controller-manager-srv-g2i2t.gb1.brightbox.com\" (UID: \"e69ce0004b5ae706396b4ed4823bccb1\") " pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.069980 kubelet[2602]: I1216 15:26:52.069368 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e69ce0004b5ae706396b4ed4823bccb1-kubeconfig\") pod \"kube-controller-manager-srv-g2i2t.gb1.brightbox.com\" (UID: \"e69ce0004b5ae706396b4ed4823bccb1\") " pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.069980 kubelet[2602]: I1216 15:26:52.069395 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e69ce0004b5ae706396b4ed4823bccb1-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-g2i2t.gb1.brightbox.com\" (UID: \"e69ce0004b5ae706396b4ed4823bccb1\") " pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.069980 kubelet[2602]: I1216 15:26:52.069420 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b48045d36a0aab2c6190a017d8aee74-ca-certs\") pod \"kube-apiserver-srv-g2i2t.gb1.brightbox.com\" (UID: \"2b48045d36a0aab2c6190a017d8aee74\") " pod="kube-system/kube-apiserver-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.069980 kubelet[2602]: I1216 15:26:52.069477 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/e69ce0004b5ae706396b4ed4823bccb1-ca-certs\") pod \"kube-controller-manager-srv-g2i2t.gb1.brightbox.com\" (UID: \"e69ce0004b5ae706396b4ed4823bccb1\") " pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.070232 kubelet[2602]: I1216 15:26:52.069545 2602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e69ce0004b5ae706396b4ed4823bccb1-flexvolume-dir\") pod \"kube-controller-manager-srv-g2i2t.gb1.brightbox.com\" (UID: \"e69ce0004b5ae706396b4ed4823bccb1\") " pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.191277 kubelet[2602]: E1216 15:26:52.191180 2602 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.25.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g2i2t.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.25.166:6443: connect: connection refused" interval="800ms" Dec 16 15:26:52.268349 containerd[1667]: time="2025-12-16T15:26:52.267581808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-g2i2t.gb1.brightbox.com,Uid:2b48045d36a0aab2c6190a017d8aee74,Namespace:kube-system,Attempt:0,}" Dec 16 15:26:52.329891 containerd[1667]: time="2025-12-16T15:26:52.329756815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-g2i2t.gb1.brightbox.com,Uid:e69ce0004b5ae706396b4ed4823bccb1,Namespace:kube-system,Attempt:0,}" Dec 16 15:26:52.336938 containerd[1667]: time="2025-12-16T15:26:52.336713056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-g2i2t.gb1.brightbox.com,Uid:e10106eb42f2b5319f7b2ab23688d25a,Namespace:kube-system,Attempt:0,}" Dec 16 15:26:52.427344 kubelet[2602]: I1216 15:26:52.427149 2602 kubelet_node_status.go:75] "Attempting to register node" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.428019 kubelet[2602]: E1216 15:26:52.427978 2602 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.25.166:6443/api/v1/nodes\": dial tcp 10.230.25.166:6443: connect: connection refused" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:52.705382 kubelet[2602]: E1216 15:26:52.705303 2602 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.25.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-g2i2t.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.25.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 15:26:52.737647 kubelet[2602]: E1216 15:26:52.737558 2602 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.25.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.25.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 15:26:52.767998 kubelet[2602]: E1216 15:26:52.767938 2602 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.25.166:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.25.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 15:26:52.993953 kubelet[2602]: E1216 15:26:52.992622 2602 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://10.230.25.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g2i2t.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.25.166:6443: connect: connection refused" interval="1.6s" Dec 16 15:26:53.021768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729767930.mount: Deactivated successfully. Dec 16 15:26:53.057328 containerd[1667]: time="2025-12-16T15:26:53.057142611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 15:26:53.072561 kubelet[2602]: E1216 15:26:53.072453 2602 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.25.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.25.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 15:26:53.081937 containerd[1667]: time="2025-12-16T15:26:53.081698375Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 15:26:53.093075 containerd[1667]: time="2025-12-16T15:26:53.084379610Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 15:26:53.107467 containerd[1667]: time="2025-12-16T15:26:53.107370716Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 15:26:53.109000 containerd[1667]: time="2025-12-16T15:26:53.108913204Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 15:26:53.110007 containerd[1667]: time="2025-12-16T15:26:53.109971618Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 15:26:53.118699 containerd[1667]: time="2025-12-16T15:26:53.118626790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 15:26:53.119859 containerd[1667]: time="2025-12-16T15:26:53.119404153Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 758.642531ms" Dec 16 15:26:53.123859 containerd[1667]: time="2025-12-16T15:26:53.123822661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 15:26:53.142086 containerd[1667]: time="2025-12-16T15:26:53.142023165Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", 
size \"320368\" in 781.27367ms" Dec 16 15:26:53.146149 containerd[1667]: time="2025-12-16T15:26:53.146110105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 811.471458ms" Dec 16 15:26:53.258990 kubelet[2602]: I1216 15:26:53.256966 2602 kubelet_node_status.go:75] "Attempting to register node" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:53.258990 kubelet[2602]: E1216 15:26:53.258899 2602 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.25.166:6443/api/v1/nodes\": dial tcp 10.230.25.166:6443: connect: connection refused" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:53.347210 containerd[1667]: time="2025-12-16T15:26:53.347086441Z" level=info msg="connecting to shim 29848eafe3fa4725e2426e2b48444840616a73c2e71941a9bc6b7a4939104c37" address="unix:///run/containerd/s/75df26f2074f6d2ca27d4101bc1c7f408a0b6bf1105ca5f63630029628bed66b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:26:53.350338 containerd[1667]: time="2025-12-16T15:26:53.350268788Z" level=info msg="connecting to shim b8641f6632027a7a614ba3ac49144c8e7830e3fff11c31c8bcc7605857ecc4aa" address="unix:///run/containerd/s/e9b039510f958f0b2212313694ca8ea3e2e4b5c7002e2f17a9def0075025484a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:26:53.351121 containerd[1667]: time="2025-12-16T15:26:53.351089315Z" level=info msg="connecting to shim 49983e6915991eec1347b94b1774aeb833ef4a0c05e78887eb89259a143ceada" address="unix:///run/containerd/s/b6dff95e506fcc8b617d7a12b7284c13cc636b2d52eee2ce7937996ea3cfc0f0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:26:53.464906 systemd[1]: Started cri-containerd-29848eafe3fa4725e2426e2b48444840616a73c2e71941a9bc6b7a4939104c37.scope - libcontainer container 29848eafe3fa4725e2426e2b48444840616a73c2e71941a9bc6b7a4939104c37. Dec 16 15:26:53.468923 systemd[1]: Started cri-containerd-49983e6915991eec1347b94b1774aeb833ef4a0c05e78887eb89259a143ceada.scope - libcontainer container 49983e6915991eec1347b94b1774aeb833ef4a0c05e78887eb89259a143ceada. Dec 16 15:26:53.472669 systemd[1]: Started cri-containerd-b8641f6632027a7a614ba3ac49144c8e7830e3fff11c31c8bcc7605857ecc4aa.scope - libcontainer container b8641f6632027a7a614ba3ac49144c8e7830e3fff11c31c8bcc7605857ecc4aa. 
Dec 16 15:26:53.507000 audit: BPF prog-id=86 op=LOAD Dec 16 15:26:53.509000 audit: BPF prog-id=87 op=LOAD Dec 16 15:26:53.509000 audit[2704]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2677 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238363431663636333230323761376136313462613361633439313434 Dec 16 15:26:53.510000 audit: BPF prog-id=87 op=UNLOAD Dec 16 15:26:53.510000 audit[2704]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238363431663636333230323761376136313462613361633439313434 Dec 16 15:26:53.511000 audit: BPF prog-id=88 op=LOAD Dec 16 15:26:53.511000 audit[2704]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2677 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238363431663636333230323761376136313462613361633439313434 Dec 16 15:26:53.511000 audit: BPF prog-id=89 op=LOAD Dec 16 15:26:53.511000 audit[2704]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2677 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238363431663636333230323761376136313462613361633439313434 Dec 16 15:26:53.511000 audit: BPF prog-id=89 op=UNLOAD Dec 16 15:26:53.511000 audit[2704]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238363431663636333230323761376136313462613361633439313434 Dec 16 15:26:53.511000 audit: BPF prog-id=88 op=UNLOAD Dec 16 15:26:53.511000 audit[2704]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238363431663636333230323761376136313462613361633439313434 Dec 16 15:26:53.511000 audit: BPF prog-id=90 op=LOAD Dec 16 15:26:53.511000 audit[2704]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2677 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238363431663636333230323761376136313462613361633439313434 Dec 16 15:26:53.513000 audit: BPF prog-id=91 op=LOAD Dec 16 15:26:53.514000 audit: BPF prog-id=92 op=LOAD Dec 16 15:26:53.514000 audit[2706]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e238 a2=98 a3=0 items=0 ppid=2678 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439393833653639313539393165656331333437623934623137373461 Dec 16 15:26:53.515000 audit: BPF prog-id=92 op=UNLOAD Dec 16 15:26:53.515000 audit[2706]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2678 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439393833653639313539393165656331333437623934623137373461 Dec 16 15:26:53.515000 audit: BPF prog-id=93 op=LOAD Dec 16 15:26:53.515000 audit[2706]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e488 a2=98 a3=0 items=0 ppid=2678 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439393833653639313539393165656331333437623934623137373461 Dec 16 15:26:53.515000 audit: BPF prog-id=94 op=LOAD Dec 16 15:26:53.515000 audit[2706]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017e218 a2=98 a3=0 items=0 ppid=2678 pid=2706 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439393833653639313539393165656331333437623934623137373461 Dec 16 15:26:53.515000 audit: BPF prog-id=94 op=UNLOAD Dec 16 15:26:53.515000 audit[2706]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2678 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439393833653639313539393165656331333437623934623137373461 Dec 16 15:26:53.515000 audit: BPF prog-id=93 op=UNLOAD Dec 16 15:26:53.515000 audit[2706]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2678 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439393833653639313539393165656331333437623934623137373461 Dec 16 15:26:53.515000 audit: BPF prog-id=95 op=LOAD Dec 16 15:26:53.515000 audit[2706]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e6e8 a2=98 a3=0 items=0 ppid=2678 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439393833653639313539393165656331333437623934623137373461 Dec 16 15:26:53.517000 audit: BPF prog-id=96 op=LOAD Dec 16 15:26:53.519000 audit: BPF prog-id=97 op=LOAD Dec 16 15:26:53.519000 audit[2697]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2665 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239383438656166653366613437323565323432366532623438343434 Dec 16 15:26:53.519000 audit: BPF prog-id=97 op=UNLOAD Dec 16 15:26:53.519000 audit[2697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2665 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239383438656166653366613437323565323432366532623438343434 Dec 16 15:26:53.520000 audit: BPF prog-id=98 op=LOAD Dec 16 15:26:53.520000 audit[2697]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2665 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239383438656166653366613437323565323432366532623438343434 Dec 16 15:26:53.520000 audit: BPF prog-id=99 op=LOAD Dec 16 15:26:53.520000 audit[2697]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2665 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239383438656166653366613437323565323432366532623438343434 Dec 16 15:26:53.520000 audit: BPF prog-id=99 op=UNLOAD Dec 16 15:26:53.520000 audit[2697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2665 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239383438656166653366613437323565323432366532623438343434 Dec 16 15:26:53.520000 audit: BPF prog-id=98 op=UNLOAD Dec 16 15:26:53.520000 audit[2697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2665 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239383438656166653366613437323565323432366532623438343434 Dec 16 15:26:53.520000 audit: BPF prog-id=100 op=LOAD Dec 16 15:26:53.520000 audit[2697]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2665 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.520000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239383438656166653366613437323565323432366532623438343434 Dec 16 15:26:53.571137 kubelet[2602]: E1216 15:26:53.571089 2602 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.25.166:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.25.166:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 15:26:53.606134 containerd[1667]: time="2025-12-16T15:26:53.606079798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-g2i2t.gb1.brightbox.com,Uid:2b48045d36a0aab2c6190a017d8aee74,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8641f6632027a7a614ba3ac49144c8e7830e3fff11c31c8bcc7605857ecc4aa\"" Dec 16 15:26:53.626691 containerd[1667]: time="2025-12-16T15:26:53.626632607Z" level=info msg="CreateContainer within sandbox \"b8641f6632027a7a614ba3ac49144c8e7830e3fff11c31c8bcc7605857ecc4aa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 15:26:53.630791 containerd[1667]: time="2025-12-16T15:26:53.630638047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-g2i2t.gb1.brightbox.com,Uid:e69ce0004b5ae706396b4ed4823bccb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"29848eafe3fa4725e2426e2b48444840616a73c2e71941a9bc6b7a4939104c37\"" Dec 16 15:26:53.641217 containerd[1667]: time="2025-12-16T15:26:53.641121353Z" level=info msg="CreateContainer within sandbox \"29848eafe3fa4725e2426e2b48444840616a73c2e71941a9bc6b7a4939104c37\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 15:26:53.644680 containerd[1667]: time="2025-12-16T15:26:53.644641669Z" level=info msg="Container 9ebf1802d337bcf1773c15cd1319826d77885eb6d97a4ec11cce863c65c10b2b: CDI devices from CRI Config.CDIDevices: []" Dec 16 15:26:53.653902 containerd[1667]: time="2025-12-16T15:26:53.653827708Z" level=info msg="Container dfc3ef590c28c6b9e4199e1dff927e4fe18b80f8f472e48b333b5c19210bf631: CDI devices from CRI Config.CDIDevices: []" Dec 16 15:26:53.660549 containerd[1667]: time="2025-12-16T15:26:53.660077925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-g2i2t.gb1.brightbox.com,Uid:e10106eb42f2b5319f7b2ab23688d25a,Namespace:kube-system,Attempt:0,} returns sandbox id \"49983e6915991eec1347b94b1774aeb833ef4a0c05e78887eb89259a143ceada\"" Dec 16 15:26:53.666175 containerd[1667]: time="2025-12-16T15:26:53.666102316Z" level=info msg="CreateContainer within sandbox \"b8641f6632027a7a614ba3ac49144c8e7830e3fff11c31c8bcc7605857ecc4aa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9ebf1802d337bcf1773c15cd1319826d77885eb6d97a4ec11cce863c65c10b2b\"" Dec 16 15:26:53.666575 containerd[1667]: time="2025-12-16T15:26:53.666106512Z" level=info msg="CreateContainer within sandbox \"49983e6915991eec1347b94b1774aeb833ef4a0c05e78887eb89259a143ceada\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 15:26:53.668851 containerd[1667]: time="2025-12-16T15:26:53.667654223Z" level=info msg="StartContainer for \"9ebf1802d337bcf1773c15cd1319826d77885eb6d97a4ec11cce863c65c10b2b\"" Dec 16 15:26:53.676416 containerd[1667]: time="2025-12-16T15:26:53.676351695Z" 
level=info msg="connecting to shim 9ebf1802d337bcf1773c15cd1319826d77885eb6d97a4ec11cce863c65c10b2b" address="unix:///run/containerd/s/e9b039510f958f0b2212313694ca8ea3e2e4b5c7002e2f17a9def0075025484a" protocol=ttrpc version=3 Dec 16 15:26:53.678082 containerd[1667]: time="2025-12-16T15:26:53.677984479Z" level=info msg="CreateContainer within sandbox \"29848eafe3fa4725e2426e2b48444840616a73c2e71941a9bc6b7a4939104c37\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dfc3ef590c28c6b9e4199e1dff927e4fe18b80f8f472e48b333b5c19210bf631\"" Dec 16 15:26:53.682246 containerd[1667]: time="2025-12-16T15:26:53.682181487Z" level=info msg="StartContainer for \"dfc3ef590c28c6b9e4199e1dff927e4fe18b80f8f472e48b333b5c19210bf631\"" Dec 16 15:26:53.686772 containerd[1667]: time="2025-12-16T15:26:53.686714599Z" level=info msg="Container 03156d85de0816e87e272ce0f8c855495deca31b241e9974305170b132a4c209: CDI devices from CRI Config.CDIDevices: []" Dec 16 15:26:53.687827 containerd[1667]: time="2025-12-16T15:26:53.687781092Z" level=info msg="connecting to shim dfc3ef590c28c6b9e4199e1dff927e4fe18b80f8f472e48b333b5c19210bf631" address="unix:///run/containerd/s/75df26f2074f6d2ca27d4101bc1c7f408a0b6bf1105ca5f63630029628bed66b" protocol=ttrpc version=3 Dec 16 15:26:53.703558 containerd[1667]: time="2025-12-16T15:26:53.703474598Z" level=info msg="CreateContainer within sandbox \"49983e6915991eec1347b94b1774aeb833ef4a0c05e78887eb89259a143ceada\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"03156d85de0816e87e272ce0f8c855495deca31b241e9974305170b132a4c209\"" Dec 16 15:26:53.705755 containerd[1667]: time="2025-12-16T15:26:53.705702136Z" level=info msg="StartContainer for \"03156d85de0816e87e272ce0f8c855495deca31b241e9974305170b132a4c209\"" Dec 16 15:26:53.712119 containerd[1667]: time="2025-12-16T15:26:53.711555091Z" level=info msg="connecting to shim 03156d85de0816e87e272ce0f8c855495deca31b241e9974305170b132a4c209" address="unix:///run/containerd/s/b6dff95e506fcc8b617d7a12b7284c13cc636b2d52eee2ce7937996ea3cfc0f0" protocol=ttrpc version=3 Dec 16 15:26:53.721203 systemd[1]: Started cri-containerd-9ebf1802d337bcf1773c15cd1319826d77885eb6d97a4ec11cce863c65c10b2b.scope - libcontainer container 9ebf1802d337bcf1773c15cd1319826d77885eb6d97a4ec11cce863c65c10b2b. Dec 16 15:26:53.739886 systemd[1]: Started cri-containerd-dfc3ef590c28c6b9e4199e1dff927e4fe18b80f8f472e48b333b5c19210bf631.scope - libcontainer container dfc3ef590c28c6b9e4199e1dff927e4fe18b80f8f472e48b333b5c19210bf631. Dec 16 15:26:53.766769 systemd[1]: Started cri-containerd-03156d85de0816e87e272ce0f8c855495deca31b241e9974305170b132a4c209.scope - libcontainer container 03156d85de0816e87e272ce0f8c855495deca31b241e9974305170b132a4c209. 
Dec 16 15:26:53.775000 audit: BPF prog-id=101 op=LOAD Dec 16 15:26:53.776000 audit: BPF prog-id=102 op=LOAD Dec 16 15:26:53.776000 audit[2781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2677 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965626631383032643333376263663137373363313563643133313938 Dec 16 15:26:53.776000 audit: BPF prog-id=102 op=UNLOAD Dec 16 15:26:53.776000 audit[2781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965626631383032643333376263663137373363313563643133313938 Dec 16 15:26:53.777000 audit: BPF prog-id=103 op=LOAD Dec 16 15:26:53.777000 audit[2781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2677 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965626631383032643333376263663137373363313563643133313938 Dec 16 15:26:53.777000 audit: BPF prog-id=104 op=LOAD Dec 16 15:26:53.777000 audit[2781]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2677 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965626631383032643333376263663137373363313563643133313938 Dec 16 15:26:53.777000 audit: BPF prog-id=104 op=UNLOAD Dec 16 15:26:53.777000 audit[2781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965626631383032643333376263663137373363313563643133313938 Dec 16 15:26:53.777000 audit: BPF prog-id=103 op=UNLOAD Dec 16 15:26:53.777000 audit[2781]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2677 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965626631383032643333376263663137373363313563643133313938 Dec 16 15:26:53.777000 audit: BPF prog-id=105 op=LOAD Dec 16 15:26:53.777000 audit[2781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2677 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965626631383032643333376263663137373363313563643133313938 Dec 16 15:26:53.795000 audit: BPF prog-id=106 op=LOAD Dec 16 15:26:53.797000 audit: BPF prog-id=107 op=LOAD Dec 16 15:26:53.797000 audit[2787]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2665 pid=2787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633365663539306332386336623965343139396531646666393237 Dec 16 15:26:53.798000 audit: BPF prog-id=107 op=UNLOAD Dec 16 15:26:53.798000 audit[2787]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2665 pid=2787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633365663539306332386336623965343139396531646666393237 Dec 16 15:26:53.800000 audit: BPF prog-id=108 op=LOAD Dec 16 15:26:53.800000 audit[2787]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2665 pid=2787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633365663539306332386336623965343139396531646666393237 Dec 16 15:26:53.800000 audit: BPF prog-id=109 op=LOAD Dec 16 15:26:53.800000 audit[2787]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2665 pid=2787 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633365663539306332386336623965343139396531646666393237 Dec 16 15:26:53.800000 audit: BPF prog-id=109 op=UNLOAD Dec 16 15:26:53.800000 audit[2787]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2665 pid=2787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633365663539306332386336623965343139396531646666393237 Dec 16 15:26:53.801000 audit: BPF prog-id=108 op=UNLOAD Dec 16 15:26:53.801000 audit[2787]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2665 pid=2787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633365663539306332386336623965343139396531646666393237 Dec 16 15:26:53.801000 audit: BPF prog-id=110 op=LOAD Dec 16 15:26:53.801000 audit[2787]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2665 pid=2787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633365663539306332386336623965343139396531646666393237 Dec 16 15:26:53.845000 audit: BPF prog-id=111 op=LOAD Dec 16 15:26:53.848000 audit: BPF prog-id=112 op=LOAD Dec 16 15:26:53.848000 audit[2803]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2678 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313536643835646530383136653837653237326365306638633835 Dec 16 15:26:53.850000 audit: BPF prog-id=112 op=UNLOAD Dec 16 15:26:53.850000 audit[2803]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2678 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313536643835646530383136653837653237326365306638633835 Dec 16 15:26:53.850000 audit: BPF prog-id=113 op=LOAD Dec 16 15:26:53.850000 audit[2803]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2678 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313536643835646530383136653837653237326365306638633835 Dec 16 15:26:53.851000 audit: BPF prog-id=114 op=LOAD Dec 16 15:26:53.851000 audit[2803]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2678 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313536643835646530383136653837653237326365306638633835 Dec 16 15:26:53.851000 audit: BPF prog-id=114 op=UNLOAD Dec 16 15:26:53.851000 audit[2803]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2678 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313536643835646530383136653837653237326365306638633835 Dec 16 15:26:53.851000 audit: BPF prog-id=113 op=UNLOAD Dec 16 15:26:53.851000 audit[2803]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2678 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313536643835646530383136653837653237326365306638633835 Dec 16 15:26:53.851000 audit: BPF prog-id=115 op=LOAD Dec 16 15:26:53.851000 audit[2803]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2678 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:26:53.851000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033313536643835646530383136653837653237326365306638633835 Dec 16 15:26:53.882537 containerd[1667]: time="2025-12-16T15:26:53.881352364Z" level=info msg="StartContainer for \"9ebf1802d337bcf1773c15cd1319826d77885eb6d97a4ec11cce863c65c10b2b\" returns successfully" Dec 16 15:26:53.902979 containerd[1667]: time="2025-12-16T15:26:53.900799412Z" level=info msg="StartContainer for \"dfc3ef590c28c6b9e4199e1dff927e4fe18b80f8f472e48b333b5c19210bf631\" returns successfully" Dec 16 15:26:53.932932 containerd[1667]: time="2025-12-16T15:26:53.932878472Z" level=info msg="StartContainer for \"03156d85de0816e87e272ce0f8c855495deca31b241e9974305170b132a4c209\" returns successfully" Dec 16 15:26:54.675536 kubelet[2602]: E1216 15:26:54.674311 2602 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:54.683933 kubelet[2602]: E1216 15:26:54.683898 2602 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:54.688779 kubelet[2602]: E1216 15:26:54.688755 2602 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:54.862531 kubelet[2602]: I1216 15:26:54.862449 2602 kubelet_node_status.go:75] "Attempting to register node" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:55.692365 kubelet[2602]: E1216 15:26:55.692300 2602 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:55.693910 kubelet[2602]: E1216 15:26:55.693367 2602 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:55.694172 kubelet[2602]: E1216 15:26:55.694149 2602 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:56.696272 kubelet[2602]: E1216 15:26:56.696231 2602 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:56.698537 kubelet[2602]: E1216 15:26:56.697278 2602 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:57.309460 systemd[1]: Started sshd@9-10.230.25.166:22-193.46.255.20:18903.service - OpenSSH per-connection server daemon (193.46.255.20:18903). Dec 16 15:26:57.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.25.166:22-193.46.255.20:18903 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:26:57.313593 kernel: kauditd_printk_skb: 206 callbacks suppressed Dec 16 15:26:57.313691 kernel: audit: type=1130 audit(1765898817.309:400): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.25.166:22-193.46.255.20:18903 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:26:57.562920 kubelet[2602]: E1216 15:26:57.562765 2602 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-g2i2t.gb1.brightbox.com\" not found" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:57.611537 kubelet[2602]: I1216 15:26:57.611326 2602 kubelet_node_status.go:78] "Successfully registered node" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:57.668535 kubelet[2602]: I1216 15:26:57.667216 2602 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:57.751550 kubelet[2602]: E1216 15:26:57.751486 2602 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-g2i2t.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:57.752641 kubelet[2602]: I1216 15:26:57.752218 2602 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:57.763849 kubelet[2602]: E1216 15:26:57.763779 2602 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-g2i2t.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:57.764310 kubelet[2602]: I1216 15:26:57.764082 2602 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:57.768783 kubelet[2602]: E1216 15:26:57.768714 2602 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-g2i2t.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:26:57.785000 audit[2886]: USER_AUTH pid=2886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:26:57.785401 sshd-session[2886]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20 user=root Dec 16 15:26:57.795634 kernel: audit: type=1100 audit(1765898817.785:401): pid=2886 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:26:58.529370 kubelet[2602]: I1216 15:26:58.529257 2602 apiserver.go:52] "Watching apiserver" Dec 16 15:26:58.568404 kubelet[2602]: I1216 15:26:58.568347 2602 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 15:26:59.934468 systemd[1]: Reload requested from client PID 2892 ('systemctl') (unit session-11.scope)... Dec 16 15:26:59.934499 systemd[1]: Reloading... 
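For reference: the kernel "audit: type=... audit(...)" lines above carry their own timestamp in the form audit(<unix-epoch-seconds>.<milliseconds>:<event-serial>); for example audit(1765898817.309:400) is the same instant as the journal prefix Dec 16 15:26:57.309, and the serial after the colon groups related records of one event. A minimal Python sketch (standard library only) for converting such a stamp:

    from datetime import datetime, timezone

    def audit_stamp_to_utc(stamp: str) -> datetime:
        """Convert an 'EPOCH.MS:SERIAL' value from an audit(...) field to a UTC datetime."""
        epoch = float(stamp.split(":", 1)[0])   # "1765898817.309:400" -> 1765898817.309
        return datetime.fromtimestamp(epoch, tz=timezone.utc)

    # audit(1765898817.309:400) -> 2025-12-16 15:26:57.309000+00:00
    print(audit_stamp_to_utc("1765898817.309:400"))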
Dec 16 15:26:59.947949 sshd[2883]: PAM: Permission denied for root from 193.46.255.20 Dec 16 15:27:00.037431 sshd-session[2900]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20 user=root Dec 16 15:27:00.037000 audit[2900]: USER_AUTH pid=2900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:00.045581 kernel: audit: type=1100 audit(1765898820.037:402): pid=2900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:00.083598 zram_generator::config[2939]: No configuration found. Dec 16 15:27:00.485981 systemd[1]: Reloading finished in 550 ms. Dec 16 15:27:00.529178 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 15:27:00.547323 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 15:27:00.548197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 15:27:00.548611 systemd[1]: kubelet.service: Consumed 1.173s CPU time, 124.5M memory peak. Dec 16 15:27:00.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:27:00.556221 kernel: audit: type=1131 audit(1765898820.547:403): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:27:00.556229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 15:27:00.555000 audit: BPF prog-id=116 op=LOAD Dec 16 15:27:00.564625 kernel: audit: type=1334 audit(1765898820.555:404): prog-id=116 op=LOAD Dec 16 15:27:00.564715 kernel: audit: type=1334 audit(1765898820.555:405): prog-id=72 op=UNLOAD Dec 16 15:27:00.555000 audit: BPF prog-id=72 op=UNLOAD Dec 16 15:27:00.566201 kernel: audit: type=1334 audit(1765898820.555:406): prog-id=117 op=LOAD Dec 16 15:27:00.555000 audit: BPF prog-id=117 op=LOAD Dec 16 15:27:00.567704 kernel: audit: type=1334 audit(1765898820.556:407): prog-id=118 op=LOAD Dec 16 15:27:00.556000 audit: BPF prog-id=118 op=LOAD Dec 16 15:27:00.569265 kernel: audit: type=1334 audit(1765898820.556:408): prog-id=73 op=UNLOAD Dec 16 15:27:00.556000 audit: BPF prog-id=73 op=UNLOAD Dec 16 15:27:00.570854 kernel: audit: type=1334 audit(1765898820.556:409): prog-id=74 op=UNLOAD Dec 16 15:27:00.556000 audit: BPF prog-id=74 op=UNLOAD Dec 16 15:27:00.559000 audit: BPF prog-id=119 op=LOAD Dec 16 15:27:00.559000 audit: BPF prog-id=75 op=UNLOAD Dec 16 15:27:00.569000 audit: BPF prog-id=120 op=LOAD Dec 16 15:27:00.569000 audit: BPF prog-id=121 op=LOAD Dec 16 15:27:00.569000 audit: BPF prog-id=76 op=UNLOAD Dec 16 15:27:00.569000 audit: BPF prog-id=77 op=UNLOAD Dec 16 15:27:00.575000 audit: BPF prog-id=122 op=LOAD Dec 16 15:27:00.575000 audit: BPF prog-id=65 op=UNLOAD Dec 16 15:27:00.578000 audit: BPF prog-id=123 op=LOAD Dec 16 15:27:00.578000 audit: BPF prog-id=69 op=UNLOAD Dec 16 15:27:00.578000 audit: BPF prog-id=124 op=LOAD Dec 16 15:27:00.578000 audit: BPF prog-id=125 op=LOAD Dec 16 15:27:00.578000 audit: BPF prog-id=70 op=UNLOAD Dec 16 15:27:00.578000 audit: BPF prog-id=71 op=UNLOAD Dec 16 15:27:00.579000 audit: BPF prog-id=126 op=LOAD Dec 16 15:27:00.583000 audit: BPF prog-id=82 op=UNLOAD Dec 16 15:27:00.584000 audit: BPF prog-id=127 op=LOAD Dec 16 15:27:00.584000 audit: BPF prog-id=78 op=UNLOAD Dec 16 15:27:00.587000 audit: BPF prog-id=128 op=LOAD Dec 16 15:27:00.587000 audit: BPF prog-id=79 op=UNLOAD Dec 16 15:27:00.587000 audit: BPF prog-id=129 op=LOAD Dec 16 15:27:00.587000 audit: BPF prog-id=130 op=LOAD Dec 16 15:27:00.587000 audit: BPF prog-id=80 op=UNLOAD Dec 16 15:27:00.587000 audit: BPF prog-id=81 op=UNLOAD Dec 16 15:27:00.589000 audit: BPF prog-id=131 op=LOAD Dec 16 15:27:00.589000 audit: BPF prog-id=85 op=UNLOAD Dec 16 15:27:00.591000 audit: BPF prog-id=132 op=LOAD Dec 16 15:27:00.591000 audit: BPF prog-id=133 op=LOAD Dec 16 15:27:00.591000 audit: BPF prog-id=83 op=UNLOAD Dec 16 15:27:00.591000 audit: BPF prog-id=84 op=UNLOAD Dec 16 15:27:00.592000 audit: BPF prog-id=134 op=LOAD Dec 16 15:27:00.592000 audit: BPF prog-id=66 op=UNLOAD Dec 16 15:27:00.592000 audit: BPF prog-id=135 op=LOAD Dec 16 15:27:00.592000 audit: BPF prog-id=136 op=LOAD Dec 16 15:27:00.592000 audit: BPF prog-id=67 op=UNLOAD Dec 16 15:27:00.592000 audit: BPF prog-id=68 op=UNLOAD Dec 16 15:27:00.957883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 15:27:00.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:27:00.976011 (kubelet)[3007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 15:27:01.082216 kubelet[3007]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Dec 16 15:27:01.082216 kubelet[3007]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 15:27:01.082775 kubelet[3007]: I1216 15:27:01.082245 3007 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 15:27:01.107973 kubelet[3007]: I1216 15:27:01.107299 3007 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 15:27:01.107973 kubelet[3007]: I1216 15:27:01.107344 3007 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 15:27:01.107973 kubelet[3007]: I1216 15:27:01.107388 3007 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 15:27:01.107973 kubelet[3007]: I1216 15:27:01.107401 3007 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 15:27:01.109570 kubelet[3007]: I1216 15:27:01.109018 3007 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 15:27:01.113496 kubelet[3007]: I1216 15:27:01.113435 3007 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 15:27:01.130993 kubelet[3007]: I1216 15:27:01.130379 3007 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 15:27:01.156844 kubelet[3007]: I1216 15:27:01.156719 3007 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 15:27:01.171448 kubelet[3007]: I1216 15:27:01.171329 3007 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 15:27:01.178006 kubelet[3007]: I1216 15:27:01.176869 3007 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 15:27:01.178006 kubelet[3007]: I1216 15:27:01.176940 3007 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-g2i2t.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 15:27:01.178006 kubelet[3007]: I1216 15:27:01.177258 3007 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 15:27:01.178006 kubelet[3007]: I1216 15:27:01.177274 3007 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 15:27:01.178926 kubelet[3007]: I1216 15:27:01.177319 3007 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 15:27:01.181338 kubelet[3007]: I1216 15:27:01.181312 3007 state_mem.go:36] "Initialized new in-memory state store" Dec 16 15:27:01.182041 kubelet[3007]: I1216 15:27:01.181989 3007 kubelet.go:475] "Attempting to sync node with API server" Dec 16 15:27:01.182457 kubelet[3007]: I1216 15:27:01.182394 3007 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 15:27:01.182724 kubelet[3007]: I1216 15:27:01.182641 3007 kubelet.go:387] "Adding apiserver pod source" Dec 16 15:27:01.183137 kubelet[3007]: I1216 15:27:01.182881 3007 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 15:27:01.187789 kubelet[3007]: I1216 15:27:01.187134 3007 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 15:27:01.189646 kubelet[3007]: I1216 15:27:01.189556 3007 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 15:27:01.190257 kubelet[3007]: I1216 15:27:01.190236 3007 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 
15:27:01.218024 kubelet[3007]: I1216 15:27:01.217895 3007 server.go:1262] "Started kubelet" Dec 16 15:27:01.220947 kubelet[3007]: I1216 15:27:01.220378 3007 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 15:27:01.220947 kubelet[3007]: I1216 15:27:01.220441 3007 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 15:27:01.225637 kubelet[3007]: I1216 15:27:01.224119 3007 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 15:27:01.226632 kubelet[3007]: I1216 15:27:01.226592 3007 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 15:27:01.237284 kubelet[3007]: I1216 15:27:01.235397 3007 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 15:27:01.242398 kubelet[3007]: I1216 15:27:01.242346 3007 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 15:27:01.246348 kubelet[3007]: E1216 15:27:01.243895 3007 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-g2i2t.gb1.brightbox.com\" not found" Dec 16 15:27:01.246348 kubelet[3007]: I1216 15:27:01.244289 3007 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 15:27:01.246348 kubelet[3007]: I1216 15:27:01.244743 3007 server.go:310] "Adding debug handlers to kubelet server" Dec 16 15:27:01.246348 kubelet[3007]: I1216 15:27:01.246130 3007 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 15:27:01.246348 kubelet[3007]: I1216 15:27:01.246326 3007 reconciler.go:29] "Reconciler: start to sync state" Dec 16 15:27:01.277609 kubelet[3007]: I1216 15:27:01.276824 3007 factory.go:223] Registration of the systemd container factory successfully Dec 16 15:27:01.277609 kubelet[3007]: I1216 15:27:01.276957 3007 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 15:27:01.280541 kubelet[3007]: I1216 15:27:01.280200 3007 factory.go:223] Registration of the containerd container factory successfully Dec 16 15:27:01.325378 kubelet[3007]: I1216 15:27:01.325101 3007 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 15:27:01.331917 kubelet[3007]: I1216 15:27:01.331878 3007 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 15:27:01.332130 kubelet[3007]: I1216 15:27:01.332098 3007 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 15:27:01.333633 kubelet[3007]: I1216 15:27:01.333610 3007 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 15:27:01.333816 kubelet[3007]: E1216 15:27:01.333784 3007 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 15:27:01.421239 kubelet[3007]: I1216 15:27:01.421199 3007 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 15:27:01.422813 kubelet[3007]: I1216 15:27:01.422779 3007 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 15:27:01.422894 kubelet[3007]: I1216 15:27:01.422867 3007 state_mem.go:36] "Initialized new in-memory state store" Dec 16 15:27:01.426182 kubelet[3007]: I1216 15:27:01.424274 3007 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 15:27:01.426182 kubelet[3007]: I1216 15:27:01.425821 3007 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 15:27:01.426182 kubelet[3007]: I1216 15:27:01.425881 3007 policy_none.go:49] "None policy: Start" Dec 16 15:27:01.426182 kubelet[3007]: I1216 15:27:01.425927 3007 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 15:27:01.426182 kubelet[3007]: I1216 15:27:01.425958 3007 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 15:27:01.427227 kubelet[3007]: I1216 15:27:01.426980 3007 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 15:27:01.427227 kubelet[3007]: I1216 15:27:01.427007 3007 policy_none.go:47] "Start" Dec 16 15:27:01.436300 kubelet[3007]: E1216 15:27:01.436262 3007 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 15:27:01.438735 kubelet[3007]: E1216 15:27:01.438206 3007 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 15:27:01.439389 kubelet[3007]: I1216 15:27:01.439366 3007 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 15:27:01.439554 kubelet[3007]: I1216 15:27:01.439492 3007 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 15:27:01.442480 kubelet[3007]: I1216 15:27:01.442046 3007 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 15:27:01.447874 kubelet[3007]: E1216 15:27:01.447805 3007 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 15:27:01.571356 kubelet[3007]: I1216 15:27:01.571301 3007 kubelet_node_status.go:75] "Attempting to register node" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.592150 kubelet[3007]: I1216 15:27:01.591740 3007 kubelet_node_status.go:124] "Node was previously registered" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.592150 kubelet[3007]: I1216 15:27:01.591869 3007 kubelet_node_status.go:78] "Successfully registered node" node="srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.638691 kubelet[3007]: I1216 15:27:01.638320 3007 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.638892 kubelet[3007]: I1216 15:27:01.638721 3007 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.640475 kubelet[3007]: I1216 15:27:01.640191 3007 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.649163 kubelet[3007]: I1216 15:27:01.648234 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b48045d36a0aab2c6190a017d8aee74-ca-certs\") pod \"kube-apiserver-srv-g2i2t.gb1.brightbox.com\" (UID: \"2b48045d36a0aab2c6190a017d8aee74\") " pod="kube-system/kube-apiserver-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.649163 kubelet[3007]: I1216 15:27:01.648284 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b48045d36a0aab2c6190a017d8aee74-k8s-certs\") pod \"kube-apiserver-srv-g2i2t.gb1.brightbox.com\" (UID: \"2b48045d36a0aab2c6190a017d8aee74\") " pod="kube-system/kube-apiserver-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.649163 kubelet[3007]: I1216 15:27:01.648320 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b48045d36a0aab2c6190a017d8aee74-usr-share-ca-certificates\") pod \"kube-apiserver-srv-g2i2t.gb1.brightbox.com\" (UID: \"2b48045d36a0aab2c6190a017d8aee74\") " pod="kube-system/kube-apiserver-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.649163 kubelet[3007]: I1216 15:27:01.648351 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e69ce0004b5ae706396b4ed4823bccb1-ca-certs\") pod \"kube-controller-manager-srv-g2i2t.gb1.brightbox.com\" (UID: \"e69ce0004b5ae706396b4ed4823bccb1\") " pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.649163 kubelet[3007]: I1216 15:27:01.648381 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e69ce0004b5ae706396b4ed4823bccb1-flexvolume-dir\") pod \"kube-controller-manager-srv-g2i2t.gb1.brightbox.com\" (UID: \"e69ce0004b5ae706396b4ed4823bccb1\") " pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.649496 kubelet[3007]: I1216 15:27:01.648409 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e10106eb42f2b5319f7b2ab23688d25a-kubeconfig\") pod \"kube-scheduler-srv-g2i2t.gb1.brightbox.com\" (UID: 
\"e10106eb42f2b5319f7b2ab23688d25a\") " pod="kube-system/kube-scheduler-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.649496 kubelet[3007]: I1216 15:27:01.648438 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e69ce0004b5ae706396b4ed4823bccb1-k8s-certs\") pod \"kube-controller-manager-srv-g2i2t.gb1.brightbox.com\" (UID: \"e69ce0004b5ae706396b4ed4823bccb1\") " pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.649496 kubelet[3007]: I1216 15:27:01.648464 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e69ce0004b5ae706396b4ed4823bccb1-kubeconfig\") pod \"kube-controller-manager-srv-g2i2t.gb1.brightbox.com\" (UID: \"e69ce0004b5ae706396b4ed4823bccb1\") " pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.649496 kubelet[3007]: I1216 15:27:01.649229 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e69ce0004b5ae706396b4ed4823bccb1-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-g2i2t.gb1.brightbox.com\" (UID: \"e69ce0004b5ae706396b4ed4823bccb1\") " pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:01.654264 kubelet[3007]: I1216 15:27:01.654001 3007 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 15:27:01.655992 kubelet[3007]: I1216 15:27:01.655965 3007 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 15:27:01.656274 kubelet[3007]: I1216 15:27:01.656239 3007 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 15:27:02.144909 sshd[2883]: PAM: Permission denied for root from 193.46.255.20 Dec 16 15:27:02.201559 kubelet[3007]: I1216 15:27:02.201493 3007 apiserver.go:52] "Watching apiserver" Dec 16 15:27:02.240301 sshd-session[3053]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20 user=root Dec 16 15:27:02.240000 audit[3053]: USER_AUTH pid=3053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:02.246877 kubelet[3007]: I1216 15:27:02.246627 3007 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 15:27:02.383416 kubelet[3007]: I1216 15:27:02.383340 3007 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:02.389878 kubelet[3007]: I1216 15:27:02.389795 3007 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 15:27:02.390100 kubelet[3007]: E1216 15:27:02.389926 3007 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-g2i2t.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-g2i2t.gb1.brightbox.com" Dec 16 15:27:02.466182 kubelet[3007]: I1216 15:27:02.466001 3007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-g2i2t.gb1.brightbox.com" podStartSLOduration=1.465969791 podStartE2EDuration="1.465969791s" podCreationTimestamp="2025-12-16 15:27:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 15:27:02.456069257 +0000 UTC m=+1.466004406" watchObservedRunningTime="2025-12-16 15:27:02.465969791 +0000 UTC m=+1.475904907" Dec 16 15:27:02.522686 kubelet[3007]: I1216 15:27:02.522596 3007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-g2i2t.gb1.brightbox.com" podStartSLOduration=1.522570749 podStartE2EDuration="1.522570749s" podCreationTimestamp="2025-12-16 15:27:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 15:27:02.49766141 +0000 UTC m=+1.507596563" watchObservedRunningTime="2025-12-16 15:27:02.522570749 +0000 UTC m=+1.532505878" Dec 16 15:27:02.538194 kubelet[3007]: I1216 15:27:02.537877 3007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-g2i2t.gb1.brightbox.com" podStartSLOduration=1.537837734 podStartE2EDuration="1.537837734s" podCreationTimestamp="2025-12-16 15:27:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 15:27:02.522921931 +0000 UTC m=+1.532857067" watchObservedRunningTime="2025-12-16 15:27:02.537837734 +0000 UTC m=+1.547772863" Dec 16 15:27:04.087928 sshd[2883]: PAM: Permission denied for root from 193.46.255.20 Dec 16 15:27:04.131285 sshd[2883]: Received disconnect from 193.46.255.20 port 18903:11: [preauth] Dec 16 15:27:04.131285 sshd[2883]: Disconnected from authenticating user root 193.46.255.20 port 18903 [preauth] Dec 16 15:27:04.141987 kernel: kauditd_printk_skb: 38 callbacks suppressed Dec 16 15:27:04.142261 kernel: audit: type=1109 audit(1765898824.131:448): pid=2883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:04.131000 audit[2883]: USER_ERR pid=2883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" 
exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:04.139773 systemd[1]: sshd@9-10.230.25.166:22-193.46.255.20:18903.service: Deactivated successfully. Dec 16 15:27:04.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.25.166:22-193.46.255.20:18903 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:27:04.148576 kernel: audit: type=1131 audit(1765898824.139:449): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.25.166:22-193.46.255.20:18903 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:27:04.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.25.166:22-193.46.255.20:43850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:27:04.189251 systemd[1]: Started sshd@10-10.230.25.166:22-193.46.255.20:43850.service - OpenSSH per-connection server daemon (193.46.255.20:43850). Dec 16 15:27:04.195664 kernel: audit: type=1130 audit(1765898824.188:450): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.25.166:22-193.46.255.20:43850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:27:04.561151 sshd-session[3065]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20 user=root Dec 16 15:27:04.561000 audit[3065]: USER_AUTH pid=3065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:04.567583 kernel: audit: type=1100 audit(1765898824.561:451): pid=3065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:06.036921 kubelet[3007]: I1216 15:27:06.036507 3007 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 15:27:06.039532 kubelet[3007]: I1216 15:27:06.038311 3007 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 15:27:06.039616 containerd[1667]: time="2025-12-16T15:27:06.038025730Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 15:27:06.349057 sshd[3062]: PAM: Permission denied for root from 193.46.255.20 Dec 16 15:27:06.429039 sshd-session[3066]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20 user=root Dec 16 15:27:06.428000 audit[3066]: ANOM_LOGIN_FAILURES pid=3066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='pam_faillock uid=0 exe="/usr/lib64/misc/sshd-session" hostname=? addr=? terminal=? res=success' Dec 16 15:27:06.428000 audit[3066]: USER_AUTH pid=3066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:06.437710 kernel: audit: type=2100 audit(1765898826.428:452): pid=3066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='pam_faillock uid=0 exe="/usr/lib64/misc/sshd-session" hostname=? addr=? terminal=? res=success' Dec 16 15:27:06.437852 kernel: audit: type=1100 audit(1765898826.428:453): pid=3066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:06.615569 systemd[1]: Created slice kubepods-besteffort-podae05da78_c70d_490f_ad26_d6b2d5c9e294.slice - libcontainer container kubepods-besteffort-podae05da78_c70d_490f_ad26_d6b2d5c9e294.slice. Dec 16 15:27:06.682817 kubelet[3007]: I1216 15:27:06.682746 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae05da78-c70d-490f-ad26-d6b2d5c9e294-kube-proxy\") pod \"kube-proxy-9bhs2\" (UID: \"ae05da78-c70d-490f-ad26-d6b2d5c9e294\") " pod="kube-system/kube-proxy-9bhs2" Dec 16 15:27:06.682817 kubelet[3007]: I1216 15:27:06.682805 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae05da78-c70d-490f-ad26-d6b2d5c9e294-xtables-lock\") pod \"kube-proxy-9bhs2\" (UID: \"ae05da78-c70d-490f-ad26-d6b2d5c9e294\") " pod="kube-system/kube-proxy-9bhs2" Dec 16 15:27:06.682817 kubelet[3007]: I1216 15:27:06.682834 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae05da78-c70d-490f-ad26-d6b2d5c9e294-lib-modules\") pod \"kube-proxy-9bhs2\" (UID: \"ae05da78-c70d-490f-ad26-d6b2d5c9e294\") " pod="kube-system/kube-proxy-9bhs2" Dec 16 15:27:06.683212 kubelet[3007]: I1216 15:27:06.682864 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm68l\" (UniqueName: \"kubernetes.io/projected/ae05da78-c70d-490f-ad26-d6b2d5c9e294-kube-api-access-xm68l\") pod \"kube-proxy-9bhs2\" (UID: \"ae05da78-c70d-490f-ad26-d6b2d5c9e294\") " pod="kube-system/kube-proxy-9bhs2" Dec 16 15:27:06.934898 containerd[1667]: time="2025-12-16T15:27:06.934413288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9bhs2,Uid:ae05da78-c70d-490f-ad26-d6b2d5c9e294,Namespace:kube-system,Attempt:0,}" Dec 16 15:27:06.962270 containerd[1667]: time="2025-12-16T15:27:06.962130642Z" level=info msg="connecting to shim 4d34946068628b82344b505237bcb517afa3da0485e22eee10e41bacd1670344" address="unix:///run/containerd/s/53f4513373b71300721718163fd906f9feb20213073b0a6856e7955a5d58f3be" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:27:07.029937 systemd[1]: Started cri-containerd-4d34946068628b82344b505237bcb517afa3da0485e22eee10e41bacd1670344.scope - libcontainer container 4d34946068628b82344b505237bcb517afa3da0485e22eee10e41bacd1670344. 
Dec 16 15:27:07.071000 audit: BPF prog-id=137 op=LOAD Dec 16 15:27:07.078620 kernel: audit: type=1334 audit(1765898827.071:454): prog-id=137 op=LOAD Dec 16 15:27:07.079000 audit: BPF prog-id=138 op=LOAD Dec 16 15:27:07.089481 kernel: audit: type=1334 audit(1765898827.079:455): prog-id=138 op=LOAD Dec 16 15:27:07.089591 kernel: audit: type=1300 audit(1765898827.079:455): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3075 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.079000 audit[3087]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3075 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333439343630363836323862383233343462353035323337626362 Dec 16 15:27:07.098968 kernel: audit: type=1327 audit(1765898827.079:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333439343630363836323862383233343462353035323337626362 Dec 16 15:27:07.081000 audit: BPF prog-id=138 op=UNLOAD Dec 16 15:27:07.081000 audit[3087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3075 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333439343630363836323862383233343462353035323337626362 Dec 16 15:27:07.081000 audit: BPF prog-id=139 op=LOAD Dec 16 15:27:07.081000 audit[3087]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3075 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333439343630363836323862383233343462353035323337626362 Dec 16 15:27:07.081000 audit: BPF prog-id=140 op=LOAD Dec 16 15:27:07.081000 audit[3087]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3075 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.081000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333439343630363836323862383233343462353035323337626362 Dec 16 15:27:07.081000 audit: BPF prog-id=140 op=UNLOAD Dec 16 15:27:07.081000 audit[3087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3075 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333439343630363836323862383233343462353035323337626362 Dec 16 15:27:07.081000 audit: BPF prog-id=139 op=UNLOAD Dec 16 15:27:07.081000 audit[3087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3075 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333439343630363836323862383233343462353035323337626362 Dec 16 15:27:07.081000 audit: BPF prog-id=141 op=LOAD Dec 16 15:27:07.081000 audit[3087]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3075 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333439343630363836323862383233343462353035323337626362 Dec 16 15:27:07.134966 containerd[1667]: time="2025-12-16T15:27:07.134869143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9bhs2,Uid:ae05da78-c70d-490f-ad26-d6b2d5c9e294,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d34946068628b82344b505237bcb517afa3da0485e22eee10e41bacd1670344\"" Dec 16 15:27:07.144014 containerd[1667]: time="2025-12-16T15:27:07.143874891Z" level=info msg="CreateContainer within sandbox \"4d34946068628b82344b505237bcb517afa3da0485e22eee10e41bacd1670344\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 15:27:07.161427 containerd[1667]: time="2025-12-16T15:27:07.161353851Z" level=info msg="Container 43ece5c7fe0b5570b4234bc9babbf545795719ec58a89af8177626761091bbf3: CDI devices from CRI Config.CDIDevices: []" Dec 16 15:27:07.169385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4029843862.mount: Deactivated successfully. 
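For reference: the proctitle= value in each audit PROCTITLE record above is the audited process's command line, hex-encoded with NUL bytes separating the arguments. Decoded, the records here come out as runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/... (the trailing container ID is truncated in the log). A minimal Python sketch, assuming a complete, even-length hex string:

    def decode_proctitle(hex_str: str) -> list[str]:
        """Decode an audit PROCTITLE hex value into its argv list."""
        raw = bytes.fromhex(hex_str)                       # hex text -> raw bytes
        return [arg.decode() for arg in raw.split(b"\x00") if arg]

    # "72756E63002D2D726F6F7400..." -> ['runc', '--root', '/run/containerd/runc/k8s.io', '--log', ...]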
Dec 16 15:27:07.183276 containerd[1667]: time="2025-12-16T15:27:07.183214905Z" level=info msg="CreateContainer within sandbox \"4d34946068628b82344b505237bcb517afa3da0485e22eee10e41bacd1670344\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"43ece5c7fe0b5570b4234bc9babbf545795719ec58a89af8177626761091bbf3\"" Dec 16 15:27:07.184433 containerd[1667]: time="2025-12-16T15:27:07.184324995Z" level=info msg="StartContainer for \"43ece5c7fe0b5570b4234bc9babbf545795719ec58a89af8177626761091bbf3\"" Dec 16 15:27:07.188274 containerd[1667]: time="2025-12-16T15:27:07.187947207Z" level=info msg="connecting to shim 43ece5c7fe0b5570b4234bc9babbf545795719ec58a89af8177626761091bbf3" address="unix:///run/containerd/s/53f4513373b71300721718163fd906f9feb20213073b0a6856e7955a5d58f3be" protocol=ttrpc version=3 Dec 16 15:27:07.224899 systemd[1]: Started cri-containerd-43ece5c7fe0b5570b4234bc9babbf545795719ec58a89af8177626761091bbf3.scope - libcontainer container 43ece5c7fe0b5570b4234bc9babbf545795719ec58a89af8177626761091bbf3. Dec 16 15:27:07.342502 systemd[1]: Created slice kubepods-besteffort-podab04d9b2_485b_4e41_a67d_1a0ba092262f.slice - libcontainer container kubepods-besteffort-podab04d9b2_485b_4e41_a67d_1a0ba092262f.slice. Dec 16 15:27:07.354000 audit: BPF prog-id=142 op=LOAD Dec 16 15:27:07.354000 audit[3114]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3075 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433656365356337666530623535373062343233346263396261626266 Dec 16 15:27:07.354000 audit: BPF prog-id=143 op=LOAD Dec 16 15:27:07.354000 audit[3114]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3075 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433656365356337666530623535373062343233346263396261626266 Dec 16 15:27:07.354000 audit: BPF prog-id=143 op=UNLOAD Dec 16 15:27:07.354000 audit[3114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3075 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433656365356337666530623535373062343233346263396261626266 Dec 16 15:27:07.354000 audit: BPF prog-id=142 op=UNLOAD Dec 16 15:27:07.354000 audit[3114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3075 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433656365356337666530623535373062343233346263396261626266 Dec 16 15:27:07.354000 audit: BPF prog-id=144 op=LOAD Dec 16 15:27:07.354000 audit[3114]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3075 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433656365356337666530623535373062343233346263396261626266 Dec 16 15:27:07.391082 containerd[1667]: time="2025-12-16T15:27:07.390994159Z" level=info msg="StartContainer for \"43ece5c7fe0b5570b4234bc9babbf545795719ec58a89af8177626761091bbf3\" returns successfully" Dec 16 15:27:07.391715 kubelet[3007]: I1216 15:27:07.391670 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab04d9b2-485b-4e41-a67d-1a0ba092262f-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-5nctp\" (UID: \"ab04d9b2-485b-4e41-a67d-1a0ba092262f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-5nctp" Dec 16 15:27:07.392479 kubelet[3007]: I1216 15:27:07.392044 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whd5l\" (UniqueName: \"kubernetes.io/projected/ab04d9b2-485b-4e41-a67d-1a0ba092262f-kube-api-access-whd5l\") pod \"tigera-operator-65cdcdfd6d-5nctp\" (UID: \"ab04d9b2-485b-4e41-a67d-1a0ba092262f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-5nctp" Dec 16 15:27:07.654850 containerd[1667]: time="2025-12-16T15:27:07.654774158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-5nctp,Uid:ab04d9b2-485b-4e41-a67d-1a0ba092262f,Namespace:tigera-operator,Attempt:0,}" Dec 16 15:27:07.700069 containerd[1667]: time="2025-12-16T15:27:07.699934662Z" level=info msg="connecting to shim f302fc5ecb0ec20e90f30658b8c0d6d626c6dac0b1f4cd2ad16d9c069729abd0" address="unix:///run/containerd/s/021034904f849c33640dd23ce04f5d50836f6136f60b2a322a6efd4154ce164a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:27:07.743963 systemd[1]: Started cri-containerd-f302fc5ecb0ec20e90f30658b8c0d6d626c6dac0b1f4cd2ad16d9c069729abd0.scope - libcontainer container f302fc5ecb0ec20e90f30658b8c0d6d626c6dac0b1f4cd2ad16d9c069729abd0. 
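For reference: in the SYSCALL records above, arch=c000003e is x86_64, syscall=321 is bpf(2), and syscall=3 is close(2). The bpf(2) calls use a0=5 (BPF_PROG_LOAD) and their exit value is the new program file descriptor (21 and 23 here, i.e. 0x15 and 0x17); the close(2) records whose a0 matches that descriptor in hex line up with the adjacent op=UNLOAD events. A minimal lookup sketch for the two syscall numbers that appear here, under the assumption that the host really is x86_64:

    # x86_64 syscall numbers observed in the audit SYSCALL records above.
    X86_64_SYSCALLS = {3: "close", 321: "bpf"}

    def syscall_name(nr: int) -> str:
        """Map a syscall= number from an audit record to its x86_64 name."""
        return X86_64_SYSCALLS.get(nr, f"unknown({nr})")

    print(syscall_name(321))   # -> bpf
    print(syscall_name(3))     # -> close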
Dec 16 15:27:07.772000 audit: BPF prog-id=145 op=LOAD Dec 16 15:27:07.775000 audit: BPF prog-id=146 op=LOAD Dec 16 15:27:07.775000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3156 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303266633565636230656332306539306633303635386238633064 Dec 16 15:27:07.775000 audit: BPF prog-id=146 op=UNLOAD Dec 16 15:27:07.775000 audit[3167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3156 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303266633565636230656332306539306633303635386238633064 Dec 16 15:27:07.775000 audit: BPF prog-id=147 op=LOAD Dec 16 15:27:07.775000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3156 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303266633565636230656332306539306633303635386238633064 Dec 16 15:27:07.775000 audit: BPF prog-id=148 op=LOAD Dec 16 15:27:07.775000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3156 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303266633565636230656332306539306633303635386238633064 Dec 16 15:27:07.775000 audit: BPF prog-id=148 op=UNLOAD Dec 16 15:27:07.775000 audit[3167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3156 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303266633565636230656332306539306633303635386238633064 Dec 16 15:27:07.775000 audit: BPF prog-id=147 op=UNLOAD Dec 16 15:27:07.775000 audit[3167]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3156 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303266633565636230656332306539306633303635386238633064 Dec 16 15:27:07.775000 audit: BPF prog-id=149 op=LOAD Dec 16 15:27:07.775000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3156 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:07.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303266633565636230656332306539306633303635386238633064 Dec 16 15:27:07.862935 containerd[1667]: time="2025-12-16T15:27:07.862872972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-5nctp,Uid:ab04d9b2-485b-4e41-a67d-1a0ba092262f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f302fc5ecb0ec20e90f30658b8c0d6d626c6dac0b1f4cd2ad16d9c069729abd0\"" Dec 16 15:27:07.867044 containerd[1667]: time="2025-12-16T15:27:07.866698341Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 15:27:08.005000 audit[3225]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.006000 audit[3224]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.006000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd2b75c30 a2=0 a3=7fffd2b75c1c items=0 ppid=3127 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.006000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 15:27:08.005000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff03bd9170 a2=0 a3=7fff03bd915c items=0 ppid=3127 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.005000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 15:27:08.009000 audit[3227]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.009000 audit[3226]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.009000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe90e7c160 a2=0 a3=7ffe90e7c14c items=0 ppid=3127 pid=3227 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.009000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbb31b580 a2=0 a3=7fffbb31b56c items=0 ppid=3127 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.009000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 15:27:08.009000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 15:27:08.011000 audit[3229]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.011000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff29be74e0 a2=0 a3=7fff29be74cc items=0 ppid=3127 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.011000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 15:27:08.012000 audit[3230]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.012000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe53a32a0 a2=0 a3=7fffe53a328c items=0 ppid=3127 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.012000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 15:27:08.118000 audit[3233]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.118000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd063a1100 a2=0 a3=7ffd063a10ec items=0 ppid=3127 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 15:27:08.124000 audit[3235]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.124000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd25ddae60 a2=0 a3=7ffd25ddae4c items=0 ppid=3127 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.124000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 16 15:27:08.130000 audit[3238]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.130000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeda8867c0 a2=0 a3=7ffeda8867ac items=0 ppid=3127 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 15:27:08.133000 audit[3239]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.133000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3382a780 a2=0 a3=7fff3382a76c items=0 ppid=3127 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.133000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 15:27:08.137000 audit[3241]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.137000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff5035cb30 a2=0 a3=7fff5035cb1c items=0 ppid=3127 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.137000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 15:27:08.139000 audit[3242]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.139000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc76de2230 a2=0 a3=7ffc76de221c items=0 ppid=3127 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.139000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 15:27:08.143000 audit[3244]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.143000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdb7cb44c0 a2=0 a3=7ffdb7cb44ac items=0 ppid=3127 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 15:27:08.150000 audit[3247]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.150000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffcd269f20 a2=0 a3=7fffcd269f0c items=0 ppid=3127 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 15:27:08.153000 audit[3248]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.153000 audit[3248]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe44689970 a2=0 a3=7ffe4468995c items=0 ppid=3127 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.153000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 15:27:08.159000 audit[3250]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.159000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe938dfb50 a2=0 a3=7ffe938dfb3c items=0 ppid=3127 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.159000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 15:27:08.161000 audit[3251]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.161000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff814fdcb0 a2=0 a3=7fff814fdc9c items=0 ppid=3127 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 15:27:08.165000 audit[3253]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.165000 
audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2e73c590 a2=0 a3=7fff2e73c57c items=0 ppid=3127 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.165000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 16 15:27:08.171000 audit[3256]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.171000 audit[3256]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd1ef12440 a2=0 a3=7ffd1ef1242c items=0 ppid=3127 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.171000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 15:27:08.177000 audit[3259]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.177000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff200db0d0 a2=0 a3=7fff200db0bc items=0 ppid=3127 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.177000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 15:27:08.179000 audit[3260]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.179000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa7104ac0 a2=0 a3=7fffa7104aac items=0 ppid=3127 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.179000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 15:27:08.187000 audit[3262]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.187000 audit[3262]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffb6af2af0 a2=0 a3=7fffb6af2adc items=0 ppid=3127 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.187000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 15:27:08.193000 audit[3265]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.193000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda853c8e0 a2=0 a3=7ffda853c8cc items=0 ppid=3127 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 15:27:08.195000 audit[3266]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.195000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0b19d5f0 a2=0 a3=7ffd0b19d5dc items=0 ppid=3127 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 15:27:08.199000 audit[3268]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 15:27:08.199000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffda8969cb0 a2=0 a3=7ffda8969c9c items=0 ppid=3127 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 15:27:08.244000 audit[3274]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:08.244000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd3fe129a0 a2=0 a3=7ffd3fe1298c items=0 ppid=3127 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.244000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:08.253000 audit[3274]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:08.253000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd3fe129a0 a2=0 a3=7ffd3fe1298c items=0 ppid=3127 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:08.260000 audit[3279]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.260000 audit[3279]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc47cf3450 a2=0 a3=7ffc47cf343c items=0 ppid=3127 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.260000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 15:27:08.270000 audit[3281]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.270000 audit[3281]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffeb0b97400 a2=0 a3=7ffeb0b973ec items=0 ppid=3127 pid=3281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 15:27:08.277000 audit[3284]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.277000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd970c2900 a2=0 a3=7ffd970c28ec items=0 ppid=3127 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 16 15:27:08.279000 audit[3285]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.279000 audit[3285]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff34a008c0 a2=0 a3=7fff34a008ac items=0 ppid=3127 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 15:27:08.285000 audit[3287]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.285000 audit[3287]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeec4d3e90 a2=0 a3=7ffeec4d3e7c items=0 ppid=3127 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.285000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 15:27:08.288000 audit[3288]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3288 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.288000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff441db130 a2=0 a3=7fff441db11c items=0 ppid=3127 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.288000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 15:27:08.292000 audit[3290]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.292000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd12ef9a90 a2=0 a3=7ffd12ef9a7c items=0 ppid=3127 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.292000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 15:27:08.299000 audit[3293]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3293 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.299000 audit[3293]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd7dcdab60 a2=0 a3=7ffd7dcdab4c items=0 ppid=3127 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.299000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 15:27:08.302000 audit[3294]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3294 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.302000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc92651450 a2=0 a3=7ffc9265143c items=0 ppid=3127 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.302000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 15:27:08.306000 audit[3296]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 
15:27:08.306000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcbdd81580 a2=0 a3=7ffcbdd8156c items=0 ppid=3127 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.306000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 15:27:08.307000 audit[3297]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.307000 audit[3297]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4a81d7e0 a2=0 a3=7ffe4a81d7cc items=0 ppid=3127 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.307000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 15:27:08.314000 audit[3299]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.314000 audit[3299]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd2c34600 a2=0 a3=7ffdd2c345ec items=0 ppid=3127 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 15:27:08.322000 audit[3302]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.322000 audit[3302]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffca773e960 a2=0 a3=7ffca773e94c items=0 ppid=3127 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.322000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 15:27:08.332000 audit[3305]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.332000 audit[3305]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8a705860 a2=0 a3=7ffe8a70584c items=0 ppid=3127 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.332000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 16 15:27:08.335000 audit[3306]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.335000 audit[3306]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffff0c12f00 a2=0 a3=7ffff0c12eec items=0 ppid=3127 pid=3306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.335000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 15:27:08.339000 audit[3308]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.339000 audit[3308]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe4f1ad550 a2=0 a3=7ffe4f1ad53c items=0 ppid=3127 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.339000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 15:27:08.345000 audit[3311]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.345000 audit[3311]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe0af31530 a2=0 a3=7ffe0af3151c items=0 ppid=3127 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 15:27:08.348000 audit[3312]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3312 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.348000 audit[3312]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffed39bd0 a2=0 a3=7ffffed39bbc items=0 ppid=3127 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 15:27:08.357000 audit[3314]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3314 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.357000 audit[3314]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcf0fd4e60 a2=0 a3=7ffcf0fd4e4c items=0 ppid=3127 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.357000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 15:27:08.359000 audit[3315]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3315 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.359000 audit[3315]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbf8b1e70 a2=0 a3=7fffbf8b1e5c items=0 ppid=3127 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.359000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 15:27:08.374244 kubelet[3007]: I1216 15:27:08.373481 3007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9bhs2" podStartSLOduration=2.373442432 podStartE2EDuration="2.373442432s" podCreationTimestamp="2025-12-16 15:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 15:27:07.424588108 +0000 UTC m=+6.434523249" watchObservedRunningTime="2025-12-16 15:27:08.373442432 +0000 UTC m=+7.383377559" Dec 16 15:27:08.373000 audit[3317]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.373000 audit[3317]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc4219e580 a2=0 a3=7ffc4219e56c items=0 ppid=3127 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.373000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 15:27:08.381000 audit[3320]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3320 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 15:27:08.381000 audit[3320]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe32c7a090 a2=0 a3=7ffe32c7a07c items=0 ppid=3127 pid=3320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 15:27:08.388000 audit[3322]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 15:27:08.388000 audit[3322]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff89a2f270 a2=0 a3=7fff89a2f25c items=0 ppid=3127 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.388000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:08.389000 audit[3322]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 15:27:08.389000 audit[3322]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff89a2f270 a2=0 a3=7fff89a2f25c items=0 ppid=3127 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:08.389000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:08.492420 sshd[3062]: PAM: Permission denied for root from 193.46.255.20 Dec 16 15:27:08.571714 sshd-session[3325]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20 user=root Dec 16 15:27:08.570000 audit[3325]: USER_AUTH pid=3325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:09.908303 sshd[3062]: PAM: Permission denied for root from 193.46.255.20 Dec 16 15:27:09.948582 sshd[3062]: Received disconnect from 193.46.255.20 port 43850:11: [preauth] Dec 16 15:27:09.948582 sshd[3062]: Disconnected from authenticating user root 193.46.255.20 port 43850 [preauth] Dec 16 15:27:09.948000 audit[3062]: USER_ERR pid=3062 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:09.960100 kernel: kauditd_printk_skb: 209 callbacks suppressed Dec 16 15:27:09.960233 kernel: audit: type=1109 audit(1765898829.948:527): pid=3062 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:09.963530 systemd[1]: sshd@10-10.230.25.166:22-193.46.255.20:43850.service: Deactivated successfully. Dec 16 15:27:09.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.25.166:22-193.46.255.20:43850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:27:09.973058 kernel: audit: type=1131 audit(1765898829.963:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.25.166:22-193.46.255.20:43850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:27:10.018080 systemd[1]: Started sshd@11-10.230.25.166:22-193.46.255.20:41352.service - OpenSSH per-connection server daemon (193.46.255.20:41352). Dec 16 15:27:10.023546 kernel: audit: type=1130 audit(1765898830.016:529): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.25.166:22-193.46.255.20:41352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:27:10.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.25.166:22-193.46.255.20:41352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:27:10.252979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537594543.mount: Deactivated successfully. Dec 16 15:27:10.405030 sshd-session[3340]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20 user=root Dec 16 15:27:10.414931 kernel: audit: type=1100 audit(1765898830.404:530): pid=3340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:10.404000 audit[3340]: USER_AUTH pid=3340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:11.833503 containerd[1667]: time="2025-12-16T15:27:11.832275751Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 15:27:11.837112 containerd[1667]: time="2025-12-16T15:27:11.837071080Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.970313181s" Dec 16 15:27:11.837262 containerd[1667]: time="2025-12-16T15:27:11.837232282Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 15:27:11.855546 containerd[1667]: time="2025-12-16T15:27:11.855468361Z" level=info msg="CreateContainer within sandbox \"f302fc5ecb0ec20e90f30658b8c0d6d626c6dac0b1f4cd2ad16d9c069729abd0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 15:27:11.861348 containerd[1667]: time="2025-12-16T15:27:11.861292016Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:11.863856 containerd[1667]: time="2025-12-16T15:27:11.863821472Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:11.864912 containerd[1667]: time="2025-12-16T15:27:11.864879777Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:11.869185 containerd[1667]: time="2025-12-16T15:27:11.869151745Z" level=info msg="Container 67483a8f5ee6047c023d5d2c41b688c95108372c4152bc0d8612f06c6f331336: CDI devices from CRI Config.CDIDevices: []" Dec 16 15:27:11.873427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1693411000.mount: Deactivated successfully. 
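Interleaved with the container lifecycle entries above is an SSH brute-force attempt against root from 193.46.255.20: repeated PAM permission denials, preauth disconnects, and per-connection sshd units being started and stopped by systemd. A minimal sketch, assuming Python 3 and the standard library, that tallies those denials per user and source address when the journal text is piped to it:

```python
#!/usr/bin/env python3
"""Count failed SSH logins per (user, source address) from journal text on stdin.

Sketch only: matches the sshd "PAM: Permission denied for <user> from <addr>"
lines that appear throughout this log, e.g. the repeated root login attempts
from 193.46.255.20 above.
"""
import re
import sys
from collections import Counter

DENIED = re.compile(r"PAM: Permission denied for (\S+) from (\S+)")

counts: Counter = Counter()
for line in sys.stdin:
    for user, addr in DENIED.findall(line):
        counts[(user, addr)] += 1

for (user, addr), n in counts.most_common():
    print(f"{n:4d} denied login(s) for {user!r} from {addr}")
```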
Dec 16 15:27:11.883509 containerd[1667]: time="2025-12-16T15:27:11.883435101Z" level=info msg="CreateContainer within sandbox \"f302fc5ecb0ec20e90f30658b8c0d6d626c6dac0b1f4cd2ad16d9c069729abd0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"67483a8f5ee6047c023d5d2c41b688c95108372c4152bc0d8612f06c6f331336\"" Dec 16 15:27:11.885800 containerd[1667]: time="2025-12-16T15:27:11.885755121Z" level=info msg="StartContainer for \"67483a8f5ee6047c023d5d2c41b688c95108372c4152bc0d8612f06c6f331336\"" Dec 16 15:27:11.888739 containerd[1667]: time="2025-12-16T15:27:11.888436361Z" level=info msg="connecting to shim 67483a8f5ee6047c023d5d2c41b688c95108372c4152bc0d8612f06c6f331336" address="unix:///run/containerd/s/021034904f849c33640dd23ce04f5d50836f6136f60b2a322a6efd4154ce164a" protocol=ttrpc version=3 Dec 16 15:27:11.927906 systemd[1]: Started cri-containerd-67483a8f5ee6047c023d5d2c41b688c95108372c4152bc0d8612f06c6f331336.scope - libcontainer container 67483a8f5ee6047c023d5d2c41b688c95108372c4152bc0d8612f06c6f331336. Dec 16 15:27:11.951000 audit: BPF prog-id=150 op=LOAD Dec 16 15:27:11.955555 kernel: audit: type=1334 audit(1765898831.951:531): prog-id=150 op=LOAD Dec 16 15:27:11.955000 audit: BPF prog-id=151 op=LOAD Dec 16 15:27:11.959836 kernel: audit: type=1334 audit(1765898831.955:532): prog-id=151 op=LOAD Dec 16 15:27:11.959934 kernel: audit: type=1300 audit(1765898831.955:532): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3156 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:11.955000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3156 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:11.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637343833613866356565363034376330323364356432633431623638 Dec 16 15:27:11.964926 kernel: audit: type=1327 audit(1765898831.955:532): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637343833613866356565363034376330323364356432633431623638 Dec 16 15:27:11.955000 audit: BPF prog-id=151 op=UNLOAD Dec 16 15:27:11.968914 kernel: audit: type=1334 audit(1765898831.955:533): prog-id=151 op=UNLOAD Dec 16 15:27:11.955000 audit[3341]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3156 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:11.977170 kernel: audit: type=1300 audit(1765898831.955:533): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3156 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:11.955000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637343833613866356565363034376330323364356432633431623638 Dec 16 15:27:11.956000 audit: BPF prog-id=152 op=LOAD Dec 16 15:27:11.956000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3156 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:11.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637343833613866356565363034376330323364356432633431623638 Dec 16 15:27:11.956000 audit: BPF prog-id=153 op=LOAD Dec 16 15:27:11.956000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3156 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:11.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637343833613866356565363034376330323364356432633431623638 Dec 16 15:27:11.956000 audit: BPF prog-id=153 op=UNLOAD Dec 16 15:27:11.956000 audit[3341]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3156 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:11.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637343833613866356565363034376330323364356432633431623638 Dec 16 15:27:11.956000 audit: BPF prog-id=152 op=UNLOAD Dec 16 15:27:11.956000 audit[3341]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3156 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:11.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637343833613866356565363034376330323364356432633431623638 Dec 16 15:27:11.956000 audit: BPF prog-id=154 op=LOAD Dec 16 15:27:11.956000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3156 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:11.956000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637343833613866356565363034376330323364356432633431623638 Dec 16 15:27:12.012080 containerd[1667]: time="2025-12-16T15:27:12.011953001Z" level=info msg="StartContainer for \"67483a8f5ee6047c023d5d2c41b688c95108372c4152bc0d8612f06c6f331336\" returns successfully" Dec 16 15:27:12.442631 kubelet[3007]: I1216 15:27:12.441993 3007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-5nctp" podStartSLOduration=1.4691828120000001 podStartE2EDuration="5.441951326s" podCreationTimestamp="2025-12-16 15:27:07 +0000 UTC" firstStartedPulling="2025-12-16 15:27:07.86563876 +0000 UTC m=+6.875573873" lastFinishedPulling="2025-12-16 15:27:11.838407274 +0000 UTC m=+10.848342387" observedRunningTime="2025-12-16 15:27:12.441146926 +0000 UTC m=+11.451082069" watchObservedRunningTime="2025-12-16 15:27:12.441951326 +0000 UTC m=+11.451886455" Dec 16 15:27:12.693271 sshd[3329]: PAM: Permission denied for root from 193.46.255.20 Dec 16 15:27:12.767000 audit[3375]: USER_AUTH pid=3375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:12.767945 sshd-session[3375]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20 user=root Dec 16 15:27:14.655694 sshd[3329]: PAM: Permission denied for root from 193.46.255.20 Dec 16 15:27:14.745952 sshd-session[3377]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20 user=root Dec 16 15:27:14.745000 audit[3377]: USER_AUTH pid=3377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:16.573711 sshd[3329]: PAM: Permission denied for root from 193.46.255.20 Dec 16 15:27:16.614810 sshd[3329]: Received disconnect from 193.46.255.20 port 41352:11: [preauth] Dec 16 15:27:16.614810 sshd[3329]: Disconnected from authenticating user root 193.46.255.20 port 41352 [preauth] Dec 16 15:27:16.615000 audit[3329]: USER_ERR pid=3329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:16.622546 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 16 15:27:16.622717 kernel: audit: type=1109 audit(1765898836.615:541): pid=3329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=193.46.255.20 addr=193.46.255.20 terminal=ssh res=failed' Dec 16 15:27:16.619742 systemd[1]: sshd@11-10.230.25.166:22-193.46.255.20:41352.service: Deactivated successfully. Dec 16 15:27:16.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.25.166:22-193.46.255.20:41352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:27:16.628148 kernel: audit: type=1131 audit(1765898836.619:542): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.25.166:22-193.46.255.20:41352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:27:19.377000 audit[1979]: USER_END pid=1979 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 15:27:19.379322 sudo[1979]: pam_unix(sudo:session): session closed for user root Dec 16 15:27:19.388554 kernel: audit: type=1106 audit(1765898839.377:543): pid=1979 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 15:27:19.389000 audit[1979]: CRED_DISP pid=1979 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 15:27:19.401590 kernel: audit: type=1104 audit(1765898839.389:544): pid=1979 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 15:27:19.539174 sshd[1978]: Connection closed by 139.178.89.65 port 47864 Dec 16 15:27:19.541034 sshd-session[1960]: pam_unix(sshd:session): session closed for user core Dec 16 15:27:19.546000 audit[1960]: USER_END pid=1960 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:27:19.555535 kernel: audit: type=1106 audit(1765898839.546:545): pid=1960 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:27:19.552000 audit[1960]: CRED_DISP pid=1960 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:27:19.564648 kernel: audit: type=1104 audit(1765898839.552:546): pid=1960 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:27:19.560293 systemd[1]: sshd@8-10.230.25.166:22-139.178.89.65:47864.service: Deactivated successfully. Dec 16 15:27:19.569668 kernel: audit: type=1131 audit(1765898839.560:547): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.25.166:22-139.178.89.65:47864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:27:19.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.25.166:22-139.178.89.65:47864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:27:19.566469 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 15:27:19.567775 systemd[1]: session-11.scope: Consumed 8.588s CPU time, 153.3M memory peak. Dec 16 15:27:19.574161 systemd-logind[1641]: Session 11 logged out. Waiting for processes to exit. Dec 16 15:27:19.578260 systemd-logind[1641]: Removed session 11. Dec 16 15:27:20.856000 audit[3426]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:20.866599 kernel: audit: type=1325 audit(1765898840.856:548): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:20.856000 audit[3426]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff364bbbf0 a2=0 a3=7fff364bbbdc items=0 ppid=3127 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:20.874859 kernel: audit: type=1300 audit(1765898840.856:548): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff364bbbf0 a2=0 a3=7fff364bbbdc items=0 ppid=3127 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:20.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:20.879550 kernel: audit: type=1327 audit(1765898840.856:548): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:20.879000 audit[3426]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:20.879000 audit[3426]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff364bbbf0 a2=0 a3=0 items=0 ppid=3127 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:20.879000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:21.939430 kernel: kauditd_printk_skb: 3 callbacks suppressed Dec 16 15:27:21.939661 kernel: audit: type=1325 audit(1765898841.929:550): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:21.929000 audit[3428]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:21.929000 audit[3428]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdde6b0a40 a2=0 a3=7ffdde6b0a2c items=0 ppid=3127 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:21.950808 kernel: 
audit: type=1300 audit(1765898841.929:550): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdde6b0a40 a2=0 a3=7ffdde6b0a2c items=0 ppid=3127 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:21.929000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:21.958608 kernel: audit: type=1327 audit(1765898841.929:550): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:21.946000 audit[3428]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:21.963560 kernel: audit: type=1325 audit(1765898841.946:551): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:21.946000 audit[3428]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdde6b0a40 a2=0 a3=0 items=0 ppid=3127 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:21.970536 kernel: audit: type=1300 audit(1765898841.946:551): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdde6b0a40 a2=0 a3=0 items=0 ppid=3127 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:21.946000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:21.975546 kernel: audit: type=1327 audit(1765898841.946:551): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:24.291000 audit[3431]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3431 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:24.300377 kernel: audit: type=1325 audit(1765898844.291:552): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3431 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:24.300501 kernel: audit: type=1300 audit(1765898844.291:552): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd14cc9330 a2=0 a3=7ffd14cc931c items=0 ppid=3127 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:24.291000 audit[3431]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd14cc9330 a2=0 a3=7ffd14cc931c items=0 ppid=3127 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:24.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:24.309544 kernel: audit: type=1327 audit(1765898844.291:552): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:24.309000 audit[3431]: NETFILTER_CFG 
table=nat:110 family=2 entries=12 op=nft_register_rule pid=3431 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:24.314555 kernel: audit: type=1325 audit(1765898844.309:553): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3431 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:24.309000 audit[3431]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd14cc9330 a2=0 a3=0 items=0 ppid=3127 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:24.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:25.397000 audit[3433]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:25.397000 audit[3433]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffcf7bd410 a2=0 a3=7fffcf7bd3fc items=0 ppid=3127 pid=3433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:25.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:25.402000 audit[3433]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:25.402000 audit[3433]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffcf7bd410 a2=0 a3=0 items=0 ppid=3127 pid=3433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:25.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:26.651000 audit[3435]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3435 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:26.651000 audit[3435]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd02118070 a2=0 a3=7ffd0211805c items=0 ppid=3127 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:26.651000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:26.662000 audit[3435]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3435 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:26.662000 audit[3435]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd02118070 a2=0 a3=0 items=0 ppid=3127 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:26.662000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:26.699312 systemd[1]: Created slice 
kubepods-besteffort-pod0f33ebce_5984_4247_ac93_cebbc6408845.slice - libcontainer container kubepods-besteffort-pod0f33ebce_5984_4247_ac93_cebbc6408845.slice. Dec 16 15:27:26.769552 kubelet[3007]: I1216 15:27:26.768764 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f33ebce-5984-4247-ac93-cebbc6408845-tigera-ca-bundle\") pod \"calico-typha-7669744886-69l8d\" (UID: \"0f33ebce-5984-4247-ac93-cebbc6408845\") " pod="calico-system/calico-typha-7669744886-69l8d" Dec 16 15:27:26.769552 kubelet[3007]: I1216 15:27:26.768871 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0f33ebce-5984-4247-ac93-cebbc6408845-typha-certs\") pod \"calico-typha-7669744886-69l8d\" (UID: \"0f33ebce-5984-4247-ac93-cebbc6408845\") " pod="calico-system/calico-typha-7669744886-69l8d" Dec 16 15:27:26.769552 kubelet[3007]: I1216 15:27:26.768910 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tckth\" (UniqueName: \"kubernetes.io/projected/0f33ebce-5984-4247-ac93-cebbc6408845-kube-api-access-tckth\") pod \"calico-typha-7669744886-69l8d\" (UID: \"0f33ebce-5984-4247-ac93-cebbc6408845\") " pod="calico-system/calico-typha-7669744886-69l8d" Dec 16 15:27:26.887100 systemd[1]: Created slice kubepods-besteffort-pod9c0cbbb6_7b63_4027_b375_02015d503187.slice - libcontainer container kubepods-besteffort-pod9c0cbbb6_7b63_4027_b375_02015d503187.slice. Dec 16 15:27:26.970365 kubelet[3007]: I1216 15:27:26.970184 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff8hc\" (UniqueName: \"kubernetes.io/projected/9c0cbbb6-7b63-4027-b375-02015d503187-kube-api-access-ff8hc\") pod \"calico-node-xh4xj\" (UID: \"9c0cbbb6-7b63-4027-b375-02015d503187\") " pod="calico-system/calico-node-xh4xj" Dec 16 15:27:26.970365 kubelet[3007]: I1216 15:27:26.970252 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9c0cbbb6-7b63-4027-b375-02015d503187-cni-bin-dir\") pod \"calico-node-xh4xj\" (UID: \"9c0cbbb6-7b63-4027-b375-02015d503187\") " pod="calico-system/calico-node-xh4xj" Dec 16 15:27:26.970365 kubelet[3007]: I1216 15:27:26.970283 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9c0cbbb6-7b63-4027-b375-02015d503187-cni-log-dir\") pod \"calico-node-xh4xj\" (UID: \"9c0cbbb6-7b63-4027-b375-02015d503187\") " pod="calico-system/calico-node-xh4xj" Dec 16 15:27:26.970365 kubelet[3007]: I1216 15:27:26.970315 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c0cbbb6-7b63-4027-b375-02015d503187-tigera-ca-bundle\") pod \"calico-node-xh4xj\" (UID: \"9c0cbbb6-7b63-4027-b375-02015d503187\") " pod="calico-system/calico-node-xh4xj" Dec 16 15:27:26.970365 kubelet[3007]: I1216 15:27:26.970345 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c0cbbb6-7b63-4027-b375-02015d503187-xtables-lock\") pod \"calico-node-xh4xj\" (UID: \"9c0cbbb6-7b63-4027-b375-02015d503187\") " pod="calico-system/calico-node-xh4xj" Dec 16 
15:27:26.970850 kubelet[3007]: I1216 15:27:26.970383 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9c0cbbb6-7b63-4027-b375-02015d503187-flexvol-driver-host\") pod \"calico-node-xh4xj\" (UID: \"9c0cbbb6-7b63-4027-b375-02015d503187\") " pod="calico-system/calico-node-xh4xj" Dec 16 15:27:26.970850 kubelet[3007]: I1216 15:27:26.970437 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9c0cbbb6-7b63-4027-b375-02015d503187-node-certs\") pod \"calico-node-xh4xj\" (UID: \"9c0cbbb6-7b63-4027-b375-02015d503187\") " pod="calico-system/calico-node-xh4xj" Dec 16 15:27:26.970850 kubelet[3007]: I1216 15:27:26.970467 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9c0cbbb6-7b63-4027-b375-02015d503187-var-run-calico\") pod \"calico-node-xh4xj\" (UID: \"9c0cbbb6-7b63-4027-b375-02015d503187\") " pod="calico-system/calico-node-xh4xj" Dec 16 15:27:26.970850 kubelet[3007]: I1216 15:27:26.970494 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9c0cbbb6-7b63-4027-b375-02015d503187-policysync\") pod \"calico-node-xh4xj\" (UID: \"9c0cbbb6-7b63-4027-b375-02015d503187\") " pod="calico-system/calico-node-xh4xj" Dec 16 15:27:26.970850 kubelet[3007]: I1216 15:27:26.970599 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9c0cbbb6-7b63-4027-b375-02015d503187-var-lib-calico\") pod \"calico-node-xh4xj\" (UID: \"9c0cbbb6-7b63-4027-b375-02015d503187\") " pod="calico-system/calico-node-xh4xj" Dec 16 15:27:26.971073 kubelet[3007]: I1216 15:27:26.970647 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c0cbbb6-7b63-4027-b375-02015d503187-lib-modules\") pod \"calico-node-xh4xj\" (UID: \"9c0cbbb6-7b63-4027-b375-02015d503187\") " pod="calico-system/calico-node-xh4xj" Dec 16 15:27:26.971073 kubelet[3007]: I1216 15:27:26.970680 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9c0cbbb6-7b63-4027-b375-02015d503187-cni-net-dir\") pod \"calico-node-xh4xj\" (UID: \"9c0cbbb6-7b63-4027-b375-02015d503187\") " pod="calico-system/calico-node-xh4xj" Dec 16 15:27:27.019925 containerd[1667]: time="2025-12-16T15:27:27.018344108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7669744886-69l8d,Uid:0f33ebce-5984-4247-ac93-cebbc6408845,Namespace:calico-system,Attempt:0,}" Dec 16 15:27:27.083332 kubelet[3007]: E1216 15:27:27.081847 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.083332 kubelet[3007]: W1216 15:27:27.081900 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.083984 kubelet[3007]: E1216 15:27:27.083953 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.092796 kubelet[3007]: E1216 15:27:27.092763 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.093855 kubelet[3007]: W1216 15:27:27.092959 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.094214 kubelet[3007]: E1216 15:27:27.094143 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.095011 containerd[1667]: time="2025-12-16T15:27:27.094961473Z" level=info msg="connecting to shim b4f4fb94694e9e8fc517c03bb774f0d2f013b9ef0d256ba62fe0b20b0d5c4b74" address="unix:///run/containerd/s/2b56e4c1db7001e55081ddd068bc45fac4f4b30cbeb0a31afea4b258d628860d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:27:27.113945 kubelet[3007]: E1216 15:27:27.113824 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.113945 kubelet[3007]: W1216 15:27:27.113857 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.113945 kubelet[3007]: E1216 15:27:27.113885 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.175949 systemd[1]: Started cri-containerd-b4f4fb94694e9e8fc517c03bb774f0d2f013b9ef0d256ba62fe0b20b0d5c4b74.scope - libcontainer container b4f4fb94694e9e8fc517c03bb774f0d2f013b9ef0d256ba62fe0b20b0d5c4b74. Dec 16 15:27:27.191190 kubelet[3007]: E1216 15:27:27.190278 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:27:27.224707 containerd[1667]: time="2025-12-16T15:27:27.223907594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xh4xj,Uid:9c0cbbb6-7b63-4027-b375-02015d503187,Namespace:calico-system,Attempt:0,}" Dec 16 15:27:27.226201 kubelet[3007]: E1216 15:27:27.226157 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.226457 kubelet[3007]: W1216 15:27:27.226219 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.226457 kubelet[3007]: E1216 15:27:27.226251 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.227026 kubelet[3007]: E1216 15:27:27.226877 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.227026 kubelet[3007]: W1216 15:27:27.226926 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.227026 kubelet[3007]: E1216 15:27:27.226942 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.227498 kubelet[3007]: E1216 15:27:27.227295 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.227498 kubelet[3007]: W1216 15:27:27.227309 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.227498 kubelet[3007]: E1216 15:27:27.227325 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.228248 kubelet[3007]: E1216 15:27:27.228138 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.228248 kubelet[3007]: W1216 15:27:27.228155 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.228248 kubelet[3007]: E1216 15:27:27.228171 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.229191 kubelet[3007]: E1216 15:27:27.228997 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.229191 kubelet[3007]: W1216 15:27:27.229014 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.229191 kubelet[3007]: E1216 15:27:27.229029 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.230652 kubelet[3007]: E1216 15:27:27.230416 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.230652 kubelet[3007]: W1216 15:27:27.230642 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.231183 kubelet[3007]: E1216 15:27:27.230661 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.232051 kubelet[3007]: E1216 15:27:27.231915 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.232051 kubelet[3007]: W1216 15:27:27.231937 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.232051 kubelet[3007]: E1216 15:27:27.231955 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.232416 kubelet[3007]: E1216 15:27:27.232296 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.232416 kubelet[3007]: W1216 15:27:27.232317 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.232416 kubelet[3007]: E1216 15:27:27.232367 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.232781 kubelet[3007]: E1216 15:27:27.232679 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.232781 kubelet[3007]: W1216 15:27:27.232694 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.232781 kubelet[3007]: E1216 15:27:27.232744 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.233282 kubelet[3007]: E1216 15:27:27.233258 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.233282 kubelet[3007]: W1216 15:27:27.233277 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.233576 kubelet[3007]: E1216 15:27:27.233294 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.233576 kubelet[3007]: E1216 15:27:27.233507 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.233576 kubelet[3007]: W1216 15:27:27.233561 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.233576 kubelet[3007]: E1216 15:27:27.233576 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.234452 kubelet[3007]: E1216 15:27:27.234429 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.234452 kubelet[3007]: W1216 15:27:27.234449 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.234452 kubelet[3007]: E1216 15:27:27.234465 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.234825 kubelet[3007]: E1216 15:27:27.234743 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.234825 kubelet[3007]: W1216 15:27:27.234758 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.234825 kubelet[3007]: E1216 15:27:27.234771 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.235731 kubelet[3007]: E1216 15:27:27.235234 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.235731 kubelet[3007]: W1216 15:27:27.235249 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.235731 kubelet[3007]: E1216 15:27:27.235264 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.236598 kubelet[3007]: E1216 15:27:27.236567 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.236741 kubelet[3007]: W1216 15:27:27.236601 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.236741 kubelet[3007]: E1216 15:27:27.236622 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.237021 kubelet[3007]: E1216 15:27:27.236873 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.237021 kubelet[3007]: W1216 15:27:27.236891 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.237021 kubelet[3007]: E1216 15:27:27.236906 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.237365 kubelet[3007]: E1216 15:27:27.237163 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.237365 kubelet[3007]: W1216 15:27:27.237178 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.237365 kubelet[3007]: E1216 15:27:27.237192 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.237505 kubelet[3007]: E1216 15:27:27.237432 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.237505 kubelet[3007]: W1216 15:27:27.237445 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.237505 kubelet[3007]: E1216 15:27:27.237458 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.238708 kubelet[3007]: E1216 15:27:27.237708 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.238708 kubelet[3007]: W1216 15:27:27.237729 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.238708 kubelet[3007]: E1216 15:27:27.237743 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.238708 kubelet[3007]: E1216 15:27:27.238001 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.238708 kubelet[3007]: W1216 15:27:27.238014 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.238708 kubelet[3007]: E1216 15:27:27.238029 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.275901 kubelet[3007]: E1216 15:27:27.273282 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.275901 kubelet[3007]: W1216 15:27:27.273314 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.275901 kubelet[3007]: E1216 15:27:27.273359 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.275901 kubelet[3007]: E1216 15:27:27.273879 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.275901 kubelet[3007]: W1216 15:27:27.273942 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.275901 kubelet[3007]: E1216 15:27:27.273961 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.275901 kubelet[3007]: I1216 15:27:27.273415 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/27e89a24-5a1a-4b44-908b-951574a9d075-socket-dir\") pod \"csi-node-driver-7wvd4\" (UID: \"27e89a24-5a1a-4b44-908b-951574a9d075\") " pod="calico-system/csi-node-driver-7wvd4" Dec 16 15:27:27.275901 kubelet[3007]: E1216 15:27:27.274784 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.275901 kubelet[3007]: W1216 15:27:27.274800 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.276416 kubelet[3007]: E1216 15:27:27.274816 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.276416 kubelet[3007]: E1216 15:27:27.275259 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.276416 kubelet[3007]: W1216 15:27:27.275352 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.276416 kubelet[3007]: E1216 15:27:27.275372 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.276416 kubelet[3007]: I1216 15:27:27.276050 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/27e89a24-5a1a-4b44-908b-951574a9d075-varrun\") pod \"csi-node-driver-7wvd4\" (UID: \"27e89a24-5a1a-4b44-908b-951574a9d075\") " pod="calico-system/csi-node-driver-7wvd4" Dec 16 15:27:27.276416 kubelet[3007]: E1216 15:27:27.276011 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.276416 kubelet[3007]: W1216 15:27:27.276098 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.276416 kubelet[3007]: E1216 15:27:27.276137 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.276862 kubelet[3007]: E1216 15:27:27.276737 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.276862 kubelet[3007]: W1216 15:27:27.276753 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.279008 kubelet[3007]: E1216 15:27:27.277166 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.279008 kubelet[3007]: E1216 15:27:27.277483 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.279008 kubelet[3007]: W1216 15:27:27.277497 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.279008 kubelet[3007]: E1216 15:27:27.277559 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.279008 kubelet[3007]: I1216 15:27:27.277908 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4gmh\" (UniqueName: \"kubernetes.io/projected/27e89a24-5a1a-4b44-908b-951574a9d075-kube-api-access-z4gmh\") pod \"csi-node-driver-7wvd4\" (UID: \"27e89a24-5a1a-4b44-908b-951574a9d075\") " pod="calico-system/csi-node-driver-7wvd4" Dec 16 15:27:27.279008 kubelet[3007]: E1216 15:27:27.278836 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.279008 kubelet[3007]: W1216 15:27:27.278969 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.279008 kubelet[3007]: E1216 15:27:27.278986 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.281133 kubelet[3007]: E1216 15:27:27.281059 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.281133 kubelet[3007]: W1216 15:27:27.281080 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.281133 kubelet[3007]: E1216 15:27:27.281097 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.281922 kubelet[3007]: E1216 15:27:27.281849 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.282022 kubelet[3007]: W1216 15:27:27.281875 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.294594 kernel: kauditd_printk_skb: 14 callbacks suppressed Dec 16 15:27:27.294766 kernel: audit: type=1334 audit(1765898847.287:558): prog-id=155 op=LOAD Dec 16 15:27:27.287000 audit: BPF prog-id=155 op=LOAD Dec 16 15:27:27.294964 kubelet[3007]: E1216 15:27:27.281953 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.294964 kubelet[3007]: I1216 15:27:27.282281 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27e89a24-5a1a-4b44-908b-951574a9d075-kubelet-dir\") pod \"csi-node-driver-7wvd4\" (UID: \"27e89a24-5a1a-4b44-908b-951574a9d075\") " pod="calico-system/csi-node-driver-7wvd4" Dec 16 15:27:27.294964 kubelet[3007]: E1216 15:27:27.283093 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.294964 kubelet[3007]: W1216 15:27:27.283111 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.294964 kubelet[3007]: E1216 15:27:27.283127 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.294964 kubelet[3007]: I1216 15:27:27.283150 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/27e89a24-5a1a-4b44-908b-951574a9d075-registration-dir\") pod \"csi-node-driver-7wvd4\" (UID: \"27e89a24-5a1a-4b44-908b-951574a9d075\") " pod="calico-system/csi-node-driver-7wvd4" Dec 16 15:27:27.294964 kubelet[3007]: E1216 15:27:27.283435 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.294964 kubelet[3007]: W1216 15:27:27.283451 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.295366 kubelet[3007]: E1216 15:27:27.283466 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.295366 kubelet[3007]: E1216 15:27:27.284124 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.295366 kubelet[3007]: W1216 15:27:27.284139 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.295366 kubelet[3007]: E1216 15:27:27.284162 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.295366 kubelet[3007]: E1216 15:27:27.284844 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.295366 kubelet[3007]: W1216 15:27:27.284859 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.295366 kubelet[3007]: E1216 15:27:27.284886 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.295366 kubelet[3007]: E1216 15:27:27.285347 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.295366 kubelet[3007]: W1216 15:27:27.285362 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.295366 kubelet[3007]: E1216 15:27:27.285377 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.311114 kernel: audit: type=1334 audit(1765898847.297:559): prog-id=156 op=LOAD Dec 16 15:27:27.311396 kernel: audit: type=1300 audit(1765898847.297:559): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3449 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.297000 audit: BPF prog-id=156 op=LOAD Dec 16 15:27:27.297000 audit[3464]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3449 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234663466623934363934653965386663353137633033626237373466 Dec 16 15:27:27.323664 kernel: audit: type=1327 audit(1765898847.297:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234663466623934363934653965386663353137633033626237373466 Dec 16 15:27:27.297000 audit: BPF prog-id=156 op=UNLOAD Dec 16 15:27:27.332569 kernel: audit: type=1334 audit(1765898847.297:560): prog-id=156 op=UNLOAD Dec 16 15:27:27.297000 audit[3464]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.339644 kernel: audit: type=1300 audit(1765898847.297:560): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.344055 containerd[1667]: time="2025-12-16T15:27:27.344006908Z" level=info msg="connecting to shim 151d76a350eadc3767b81e38d6b4e44ebfa991730600613bd5c06526630025c6" address="unix:///run/containerd/s/15ab7979b12ff74eeda450d0b6978db7bb83acb37b8e31567d4143868c9224bd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:27:27.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234663466623934363934653965386663353137633033626237373466 Dec 16 15:27:27.353140 kernel: audit: type=1327 audit(1765898847.297:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234663466623934363934653965386663353137633033626237373466 Dec 16 15:27:27.297000 audit: BPF prog-id=157 op=LOAD Dec 16 15:27:27.358556 kernel: audit: type=1334 audit(1765898847.297:561): prog-id=157 op=LOAD Dec 16 15:27:27.297000 audit[3464]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 
a3=0 items=0 ppid=3449 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.365606 kernel: audit: type=1300 audit(1765898847.297:561): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3449 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234663466623934363934653965386663353137633033626237373466 Dec 16 15:27:27.379556 kernel: audit: type=1327 audit(1765898847.297:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234663466623934363934653965386663353137633033626237373466 Dec 16 15:27:27.297000 audit: BPF prog-id=158 op=LOAD Dec 16 15:27:27.297000 audit[3464]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3449 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234663466623934363934653965386663353137633033626237373466 Dec 16 15:27:27.298000 audit: BPF prog-id=158 op=UNLOAD Dec 16 15:27:27.298000 audit[3464]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234663466623934363934653965386663353137633033626237373466 Dec 16 15:27:27.298000 audit: BPF prog-id=157 op=UNLOAD Dec 16 15:27:27.298000 audit[3464]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234663466623934363934653965386663353137633033626237373466 Dec 16 15:27:27.298000 audit: BPF prog-id=159 op=LOAD Dec 16 15:27:27.298000 audit[3464]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3449 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234663466623934363934653965386663353137633033626237373466 Dec 16 15:27:27.386217 kubelet[3007]: E1216 15:27:27.386025 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.386217 kubelet[3007]: W1216 15:27:27.386054 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.386217 kubelet[3007]: E1216 15:27:27.386079 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.387104 kubelet[3007]: E1216 15:27:27.387038 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.387348 kubelet[3007]: W1216 15:27:27.387254 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.387348 kubelet[3007]: E1216 15:27:27.387304 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.389875 kubelet[3007]: E1216 15:27:27.389825 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.390134 kubelet[3007]: W1216 15:27:27.389981 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.390295 kubelet[3007]: E1216 15:27:27.390250 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.391314 kubelet[3007]: E1216 15:27:27.391203 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.391572 kubelet[3007]: W1216 15:27:27.391478 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.391875 kubelet[3007]: E1216 15:27:27.391506 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.393806 kubelet[3007]: E1216 15:27:27.393745 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.393806 kubelet[3007]: W1216 15:27:27.393769 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.394116 kubelet[3007]: E1216 15:27:27.393786 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.394984 kubelet[3007]: E1216 15:27:27.394911 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.394984 kubelet[3007]: W1216 15:27:27.394934 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.394984 kubelet[3007]: E1216 15:27:27.394951 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.395936 kubelet[3007]: E1216 15:27:27.395908 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.395936 kubelet[3007]: W1216 15:27:27.395930 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.396069 kubelet[3007]: E1216 15:27:27.395949 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.396655 kubelet[3007]: E1216 15:27:27.396629 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.396655 kubelet[3007]: W1216 15:27:27.396649 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.396890 kubelet[3007]: E1216 15:27:27.396666 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.397717 kubelet[3007]: E1216 15:27:27.397625 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.397717 kubelet[3007]: W1216 15:27:27.397646 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.397717 kubelet[3007]: E1216 15:27:27.397662 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.398892 kubelet[3007]: E1216 15:27:27.398806 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.398892 kubelet[3007]: W1216 15:27:27.398838 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.398892 kubelet[3007]: E1216 15:27:27.398854 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.400074 kubelet[3007]: E1216 15:27:27.399931 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.400074 kubelet[3007]: W1216 15:27:27.399951 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.400074 kubelet[3007]: E1216 15:27:27.399971 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.401345 kubelet[3007]: E1216 15:27:27.401226 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.401345 kubelet[3007]: W1216 15:27:27.401246 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.401345 kubelet[3007]: E1216 15:27:27.401263 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.402444 kubelet[3007]: E1216 15:27:27.402358 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.402444 kubelet[3007]: W1216 15:27:27.402378 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.402444 kubelet[3007]: E1216 15:27:27.402394 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.403252 kubelet[3007]: E1216 15:27:27.403001 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.403252 kubelet[3007]: W1216 15:27:27.403020 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.403252 kubelet[3007]: E1216 15:27:27.403049 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.404645 kubelet[3007]: E1216 15:27:27.404607 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.404645 kubelet[3007]: W1216 15:27:27.404627 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.404645 kubelet[3007]: E1216 15:27:27.404643 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.405217 kubelet[3007]: E1216 15:27:27.405143 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.405217 kubelet[3007]: W1216 15:27:27.405163 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.405217 kubelet[3007]: E1216 15:27:27.405179 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.406701 kubelet[3007]: E1216 15:27:27.406621 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.406701 kubelet[3007]: W1216 15:27:27.406640 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.406701 kubelet[3007]: E1216 15:27:27.406656 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.406943 kubelet[3007]: E1216 15:27:27.406913 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.406943 kubelet[3007]: W1216 15:27:27.406927 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.407360 kubelet[3007]: E1216 15:27:27.406942 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.408107 kubelet[3007]: E1216 15:27:27.407974 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.408107 kubelet[3007]: W1216 15:27:27.407995 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.408107 kubelet[3007]: E1216 15:27:27.408011 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.409838 kubelet[3007]: E1216 15:27:27.409712 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.409838 kubelet[3007]: W1216 15:27:27.409734 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.409838 kubelet[3007]: E1216 15:27:27.409751 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.410010 kubelet[3007]: E1216 15:27:27.409986 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.410010 kubelet[3007]: W1216 15:27:27.410001 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.410093 kubelet[3007]: E1216 15:27:27.410018 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.411448 kubelet[3007]: E1216 15:27:27.411340 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.411448 kubelet[3007]: W1216 15:27:27.411361 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.411448 kubelet[3007]: E1216 15:27:27.411378 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.412276 kubelet[3007]: E1216 15:27:27.411711 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.412276 kubelet[3007]: W1216 15:27:27.411726 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.412276 kubelet[3007]: E1216 15:27:27.411740 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.414318 kubelet[3007]: E1216 15:27:27.413684 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.414318 kubelet[3007]: W1216 15:27:27.413706 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.414318 kubelet[3007]: E1216 15:27:27.413723 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:27.414318 kubelet[3007]: E1216 15:27:27.414103 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.414318 kubelet[3007]: W1216 15:27:27.414118 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.414318 kubelet[3007]: E1216 15:27:27.414133 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.446792 systemd[1]: Started cri-containerd-151d76a350eadc3767b81e38d6b4e44ebfa991730600613bd5c06526630025c6.scope - libcontainer container 151d76a350eadc3767b81e38d6b4e44ebfa991730600613bd5c06526630025c6. Dec 16 15:27:27.470449 kubelet[3007]: E1216 15:27:27.470325 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:27.470449 kubelet[3007]: W1216 15:27:27.470359 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:27.470449 kubelet[3007]: E1216 15:27:27.470388 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:27.548000 audit: BPF prog-id=160 op=LOAD Dec 16 15:27:27.552000 audit: BPF prog-id=161 op=LOAD Dec 16 15:27:27.552000 audit[3554]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3537 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316437366133353065616463333736376238316533386436623465 Dec 16 15:27:27.552000 audit: BPF prog-id=161 op=UNLOAD Dec 16 15:27:27.552000 audit[3554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3537 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316437366133353065616463333736376238316533386436623465 Dec 16 15:27:27.556000 audit: BPF prog-id=162 op=LOAD Dec 16 15:27:27.556000 audit[3554]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3537 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.556000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316437366133353065616463333736376238316533386436623465 Dec 16 15:27:27.556000 audit: BPF prog-id=163 op=LOAD Dec 16 15:27:27.556000 audit[3554]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3537 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316437366133353065616463333736376238316533386436623465 Dec 16 15:27:27.564000 audit: BPF prog-id=163 op=UNLOAD Dec 16 15:27:27.564000 audit[3554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3537 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316437366133353065616463333736376238316533386436623465 Dec 16 15:27:27.564000 audit: BPF prog-id=162 op=UNLOAD Dec 16 15:27:27.564000 audit[3554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3537 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316437366133353065616463333736376238316533386436623465 Dec 16 15:27:27.565000 audit: BPF prog-id=164 op=LOAD Dec 16 15:27:27.565000 audit[3554]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3537 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316437366133353065616463333736376238316533386436623465 Dec 16 15:27:27.595483 containerd[1667]: time="2025-12-16T15:27:27.595326940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7669744886-69l8d,Uid:0f33ebce-5984-4247-ac93-cebbc6408845,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4f4fb94694e9e8fc517c03bb774f0d2f013b9ef0d256ba62fe0b20b0d5c4b74\"" Dec 16 15:27:27.600247 containerd[1667]: time="2025-12-16T15:27:27.600194547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 15:27:27.635502 containerd[1667]: time="2025-12-16T15:27:27.635302071Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xh4xj,Uid:9c0cbbb6-7b63-4027-b375-02015d503187,Namespace:calico-system,Attempt:0,} returns sandbox id \"151d76a350eadc3767b81e38d6b4e44ebfa991730600613bd5c06526630025c6\"" Dec 16 15:27:27.680000 audit[3609]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:27.680000 audit[3609]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe01c574d0 a2=0 a3=7ffe01c574bc items=0 ppid=3127 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:27.686000 audit[3609]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:27.686000 audit[3609]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe01c574d0 a2=0 a3=0 items=0 ppid=3127 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:27.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:29.335329 kubelet[3007]: E1216 15:27:29.335268 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:27:29.371461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1786531747.mount: Deactivated successfully. 
Dec 16 15:27:31.336543 kubelet[3007]: E1216 15:27:31.336351 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:27:31.493808 containerd[1667]: time="2025-12-16T15:27:31.493741305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:31.495850 containerd[1667]: time="2025-12-16T15:27:31.495811713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35230631" Dec 16 15:27:31.501015 containerd[1667]: time="2025-12-16T15:27:31.500907253Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:31.506715 containerd[1667]: time="2025-12-16T15:27:31.506345032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:31.508002 containerd[1667]: time="2025-12-16T15:27:31.507394797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.90713346s" Dec 16 15:27:31.508002 containerd[1667]: time="2025-12-16T15:27:31.507441704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 15:27:31.510354 containerd[1667]: time="2025-12-16T15:27:31.510206304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 15:27:31.549028 containerd[1667]: time="2025-12-16T15:27:31.548973477Z" level=info msg="CreateContainer within sandbox \"b4f4fb94694e9e8fc517c03bb774f0d2f013b9ef0d256ba62fe0b20b0d5c4b74\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 15:27:31.562895 containerd[1667]: time="2025-12-16T15:27:31.562835465Z" level=info msg="Container 4c71383ec93573c7f6a5d03dd54641fe173a6228fc7f17822387735a6fec72e0: CDI devices from CRI Config.CDIDevices: []" Dec 16 15:27:31.577353 containerd[1667]: time="2025-12-16T15:27:31.577180601Z" level=info msg="CreateContainer within sandbox \"b4f4fb94694e9e8fc517c03bb774f0d2f013b9ef0d256ba62fe0b20b0d5c4b74\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4c71383ec93573c7f6a5d03dd54641fe173a6228fc7f17822387735a6fec72e0\"" Dec 16 15:27:31.578923 containerd[1667]: time="2025-12-16T15:27:31.578096182Z" level=info msg="StartContainer for \"4c71383ec93573c7f6a5d03dd54641fe173a6228fc7f17822387735a6fec72e0\"" Dec 16 15:27:31.580453 containerd[1667]: time="2025-12-16T15:27:31.580408653Z" level=info msg="connecting to shim 4c71383ec93573c7f6a5d03dd54641fe173a6228fc7f17822387735a6fec72e0" address="unix:///run/containerd/s/2b56e4c1db7001e55081ddd068bc45fac4f4b30cbeb0a31afea4b258d628860d" protocol=ttrpc version=3 Dec 16 15:27:31.656948 systemd[1]: Started 
cri-containerd-4c71383ec93573c7f6a5d03dd54641fe173a6228fc7f17822387735a6fec72e0.scope - libcontainer container 4c71383ec93573c7f6a5d03dd54641fe173a6228fc7f17822387735a6fec72e0. Dec 16 15:27:31.683000 audit: BPF prog-id=165 op=LOAD Dec 16 15:27:31.684000 audit: BPF prog-id=166 op=LOAD Dec 16 15:27:31.684000 audit[3620]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3449 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:31.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463373133383365633933353733633766366135643033646435343634 Dec 16 15:27:31.684000 audit: BPF prog-id=166 op=UNLOAD Dec 16 15:27:31.684000 audit[3620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:31.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463373133383365633933353733633766366135643033646435343634 Dec 16 15:27:31.685000 audit: BPF prog-id=167 op=LOAD Dec 16 15:27:31.685000 audit[3620]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3449 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:31.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463373133383365633933353733633766366135643033646435343634 Dec 16 15:27:31.685000 audit: BPF prog-id=168 op=LOAD Dec 16 15:27:31.685000 audit[3620]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3449 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:31.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463373133383365633933353733633766366135643033646435343634 Dec 16 15:27:31.685000 audit: BPF prog-id=168 op=UNLOAD Dec 16 15:27:31.685000 audit[3620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:31.685000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463373133383365633933353733633766366135643033646435343634 Dec 16 15:27:31.685000 audit: BPF prog-id=167 op=UNLOAD Dec 16 15:27:31.685000 audit[3620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:31.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463373133383365633933353733633766366135643033646435343634 Dec 16 15:27:31.686000 audit: BPF prog-id=169 op=LOAD Dec 16 15:27:31.686000 audit[3620]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3449 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:31.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463373133383365633933353733633766366135643033646435343634 Dec 16 15:27:31.772824 containerd[1667]: time="2025-12-16T15:27:31.772752023Z" level=info msg="StartContainer for \"4c71383ec93573c7f6a5d03dd54641fe173a6228fc7f17822387735a6fec72e0\" returns successfully" Dec 16 15:27:32.547249 kubelet[3007]: I1216 15:27:32.547151 3007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7669744886-69l8d" podStartSLOduration=2.63757743 podStartE2EDuration="6.547111909s" podCreationTimestamp="2025-12-16 15:27:26 +0000 UTC" firstStartedPulling="2025-12-16 15:27:27.59973374 +0000 UTC m=+26.609668854" lastFinishedPulling="2025-12-16 15:27:31.509268211 +0000 UTC m=+30.519203333" observedRunningTime="2025-12-16 15:27:32.544794245 +0000 UTC m=+31.554729393" watchObservedRunningTime="2025-12-16 15:27:32.547111909 +0000 UTC m=+31.557047041" Dec 16 15:27:32.580889 kubelet[3007]: E1216 15:27:32.580844 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.580889 kubelet[3007]: W1216 15:27:32.580879 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.581120 kubelet[3007]: E1216 15:27:32.580913 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:32.581507 kubelet[3007]: E1216 15:27:32.581477 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.581507 kubelet[3007]: W1216 15:27:32.581500 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.582324 kubelet[3007]: E1216 15:27:32.582292 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.582656 kubelet[3007]: E1216 15:27:32.582629 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.582656 kubelet[3007]: W1216 15:27:32.582651 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.582783 kubelet[3007]: E1216 15:27:32.582668 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.583031 kubelet[3007]: E1216 15:27:32.583002 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.583031 kubelet[3007]: W1216 15:27:32.583024 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.583134 kubelet[3007]: E1216 15:27:32.583041 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.583614 kubelet[3007]: E1216 15:27:32.583574 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.583614 kubelet[3007]: W1216 15:27:32.583606 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.583746 kubelet[3007]: E1216 15:27:32.583623 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.584621 kubelet[3007]: E1216 15:27:32.584578 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.584621 kubelet[3007]: W1216 15:27:32.584612 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.584737 kubelet[3007]: E1216 15:27:32.584630 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:32.584916 kubelet[3007]: E1216 15:27:32.584888 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.584916 kubelet[3007]: W1216 15:27:32.584909 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.585032 kubelet[3007]: E1216 15:27:32.584924 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.585774 kubelet[3007]: E1216 15:27:32.585743 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.585774 kubelet[3007]: W1216 15:27:32.585765 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.585891 kubelet[3007]: E1216 15:27:32.585782 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.586649 kubelet[3007]: E1216 15:27:32.586619 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.586649 kubelet[3007]: W1216 15:27:32.586641 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.586777 kubelet[3007]: E1216 15:27:32.586657 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.586964 kubelet[3007]: E1216 15:27:32.586941 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.586964 kubelet[3007]: W1216 15:27:32.586962 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.587091 kubelet[3007]: E1216 15:27:32.586978 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.587460 kubelet[3007]: E1216 15:27:32.587415 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.587460 kubelet[3007]: W1216 15:27:32.587436 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.587460 kubelet[3007]: E1216 15:27:32.587455 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:32.588350 kubelet[3007]: E1216 15:27:32.588318 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.588350 kubelet[3007]: W1216 15:27:32.588341 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.588474 kubelet[3007]: E1216 15:27:32.588357 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.588963 kubelet[3007]: E1216 15:27:32.588938 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.588963 kubelet[3007]: W1216 15:27:32.588961 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.589081 kubelet[3007]: E1216 15:27:32.588977 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.589370 kubelet[3007]: E1216 15:27:32.589347 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.589370 kubelet[3007]: W1216 15:27:32.589369 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.589534 kubelet[3007]: E1216 15:27:32.589386 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.590087 kubelet[3007]: E1216 15:27:32.590064 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.590087 kubelet[3007]: W1216 15:27:32.590085 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.590216 kubelet[3007]: E1216 15:27:32.590102 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:32.608894 kernel: kauditd_printk_skb: 62 callbacks suppressed Dec 16 15:27:32.609136 kernel: audit: type=1325 audit(1765898852.600:584): table=filter:117 family=2 entries=21 op=nft_register_rule pid=3678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:32.600000 audit[3678]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:32.612552 kernel: audit: type=1300 audit(1765898852.600:584): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc9122aef0 a2=0 a3=7ffc9122aedc items=0 ppid=3127 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:32.600000 audit[3678]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc9122aef0 a2=0 a3=7ffc9122aedc items=0 ppid=3127 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:32.600000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:32.618539 kernel: audit: type=1327 audit(1765898852.600:584): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:32.625000 audit[3678]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:32.625000 audit[3678]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc9122aef0 a2=0 a3=7ffc9122aedc items=0 ppid=3127 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:32.631629 kernel: audit: type=1325 audit(1765898852.625:585): table=nat:118 family=2 entries=19 op=nft_register_chain pid=3678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:27:32.631707 kernel: audit: type=1300 audit(1765898852.625:585): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc9122aef0 a2=0 a3=7ffc9122aedc items=0 ppid=3127 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:32.625000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:32.639670 kernel: audit: type=1327 audit(1765898852.625:585): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:27:32.641144 kubelet[3007]: E1216 15:27:32.641103 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.641144 kubelet[3007]: W1216 15:27:32.641140 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.642077 kubelet[3007]: E1216 15:27:32.641177 3007 plugins.go:697] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.642077 kubelet[3007]: E1216 15:27:32.641525 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.642077 kubelet[3007]: W1216 15:27:32.641542 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.642077 kubelet[3007]: E1216 15:27:32.641558 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.642077 kubelet[3007]: E1216 15:27:32.642064 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.643796 kubelet[3007]: W1216 15:27:32.642079 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.643796 kubelet[3007]: E1216 15:27:32.642102 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.643796 kubelet[3007]: E1216 15:27:32.642486 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.643796 kubelet[3007]: W1216 15:27:32.642501 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.643796 kubelet[3007]: E1216 15:27:32.642550 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.643796 kubelet[3007]: E1216 15:27:32.642855 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.643796 kubelet[3007]: W1216 15:27:32.642893 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.643796 kubelet[3007]: E1216 15:27:32.642914 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.645070 kubelet[3007]: E1216 15:27:32.644264 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.645070 kubelet[3007]: W1216 15:27:32.644280 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.645070 kubelet[3007]: E1216 15:27:32.644298 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:32.647278 kubelet[3007]: E1216 15:27:32.645884 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.647278 kubelet[3007]: W1216 15:27:32.645904 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.647278 kubelet[3007]: E1216 15:27:32.645920 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.647278 kubelet[3007]: E1216 15:27:32.646991 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.647278 kubelet[3007]: W1216 15:27:32.647010 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.647278 kubelet[3007]: E1216 15:27:32.647029 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.650431 kubelet[3007]: E1216 15:27:32.650014 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.650431 kubelet[3007]: W1216 15:27:32.650162 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.650431 kubelet[3007]: E1216 15:27:32.650186 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.652535 kubelet[3007]: E1216 15:27:32.652352 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.652535 kubelet[3007]: W1216 15:27:32.652383 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.652535 kubelet[3007]: E1216 15:27:32.652400 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.654943 kubelet[3007]: E1216 15:27:32.654902 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.655319 kubelet[3007]: W1216 15:27:32.655259 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.655749 kubelet[3007]: E1216 15:27:32.655294 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:32.655964 kubelet[3007]: E1216 15:27:32.655937 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.656190 kubelet[3007]: W1216 15:27:32.656083 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.656190 kubelet[3007]: E1216 15:27:32.656111 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.656863 kubelet[3007]: E1216 15:27:32.656830 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.656863 kubelet[3007]: W1216 15:27:32.656850 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.656863 kubelet[3007]: E1216 15:27:32.656866 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.657791 kubelet[3007]: E1216 15:27:32.657753 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.657997 kubelet[3007]: W1216 15:27:32.657964 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.658077 kubelet[3007]: E1216 15:27:32.658059 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.658814 kubelet[3007]: E1216 15:27:32.658785 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.658814 kubelet[3007]: W1216 15:27:32.658805 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.659076 kubelet[3007]: E1216 15:27:32.658822 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.660547 kubelet[3007]: E1216 15:27:32.659613 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.660547 kubelet[3007]: W1216 15:27:32.659634 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.660547 kubelet[3007]: E1216 15:27:32.659651 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:32.660547 kubelet[3007]: E1216 15:27:32.660501 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.660547 kubelet[3007]: W1216 15:27:32.660546 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.660822 kubelet[3007]: E1216 15:27:32.660564 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:32.664696 kubelet[3007]: E1216 15:27:32.664672 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:32.664850 kubelet[3007]: W1216 15:27:32.664827 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:32.664983 kubelet[3007]: E1216 15:27:32.664962 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.321670 containerd[1667]: time="2025-12-16T15:27:33.320658257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:33.321670 containerd[1667]: time="2025-12-16T15:27:33.321623822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Dec 16 15:27:33.322908 containerd[1667]: time="2025-12-16T15:27:33.322873343Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:33.325534 containerd[1667]: time="2025-12-16T15:27:33.325487038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:33.326774 containerd[1667]: time="2025-12-16T15:27:33.326735672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.816447401s" Dec 16 15:27:33.326903 containerd[1667]: time="2025-12-16T15:27:33.326877066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 15:27:33.335954 containerd[1667]: time="2025-12-16T15:27:33.334983337Z" level=info msg="CreateContainer within sandbox \"151d76a350eadc3767b81e38d6b4e44ebfa991730600613bd5c06526630025c6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 15:27:33.336278 kubelet[3007]: E1216 15:27:33.335263 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:27:33.370927 containerd[1667]: time="2025-12-16T15:27:33.370869663Z" level=info msg="Container 1c29a83440c7ee4eb42ff6b713fa084ff8dbca4409ccb85391f774f21649c50b: CDI devices from CRI Config.CDIDevices: []" Dec 16 15:27:33.376447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585426238.mount: Deactivated successfully. Dec 16 15:27:33.390348 containerd[1667]: time="2025-12-16T15:27:33.390279392Z" level=info msg="CreateContainer within sandbox \"151d76a350eadc3767b81e38d6b4e44ebfa991730600613bd5c06526630025c6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1c29a83440c7ee4eb42ff6b713fa084ff8dbca4409ccb85391f774f21649c50b\"" Dec 16 15:27:33.391531 containerd[1667]: time="2025-12-16T15:27:33.391471194Z" level=info msg="StartContainer for \"1c29a83440c7ee4eb42ff6b713fa084ff8dbca4409ccb85391f774f21649c50b\"" Dec 16 15:27:33.393803 containerd[1667]: time="2025-12-16T15:27:33.393758120Z" level=info msg="connecting to shim 1c29a83440c7ee4eb42ff6b713fa084ff8dbca4409ccb85391f774f21649c50b" address="unix:///run/containerd/s/15ab7979b12ff74eeda450d0b6978db7bb83acb37b8e31567d4143868c9224bd" protocol=ttrpc version=3 Dec 16 15:27:33.446823 systemd[1]: Started cri-containerd-1c29a83440c7ee4eb42ff6b713fa084ff8dbca4409ccb85391f774f21649c50b.scope - libcontainer container 1c29a83440c7ee4eb42ff6b713fa084ff8dbca4409ccb85391f774f21649c50b. Dec 16 15:27:33.524000 audit: BPF prog-id=170 op=LOAD Dec 16 15:27:33.530545 kernel: audit: type=1334 audit(1765898853.524:586): prog-id=170 op=LOAD Dec 16 15:27:33.524000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3537 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:33.537531 kernel: audit: type=1300 audit(1765898853.524:586): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3537 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:33.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163323961383334343063376565346562343266663662373133666130 Dec 16 15:27:33.548578 kernel: audit: type=1327 audit(1765898853.524:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163323961383334343063376565346562343266663662373133666130 Dec 16 15:27:33.529000 audit: BPF prog-id=171 op=LOAD Dec 16 15:27:33.529000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3537 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:33.529000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163323961383334343063376565346562343266663662373133666130 Dec 16 15:27:33.529000 audit: BPF prog-id=171 op=UNLOAD Dec 16 15:27:33.529000 audit[3701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3537 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:33.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163323961383334343063376565346562343266663662373133666130 Dec 16 15:27:33.529000 audit: BPF prog-id=170 op=UNLOAD Dec 16 15:27:33.529000 audit[3701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3537 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:33.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163323961383334343063376565346562343266663662373133666130 Dec 16 15:27:33.529000 audit: BPF prog-id=172 op=LOAD Dec 16 15:27:33.529000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3537 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:33.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163323961383334343063376565346562343266663662373133666130 Dec 16 15:27:33.551701 kernel: audit: type=1334 audit(1765898853.529:587): prog-id=171 op=LOAD Dec 16 15:27:33.586161 containerd[1667]: time="2025-12-16T15:27:33.585186794Z" level=info msg="StartContainer for \"1c29a83440c7ee4eb42ff6b713fa084ff8dbca4409ccb85391f774f21649c50b\" returns successfully" Dec 16 15:27:33.595841 kubelet[3007]: E1216 15:27:33.595768 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.595841 kubelet[3007]: W1216 15:27:33.595833 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.597314 kubelet[3007]: E1216 15:27:33.595868 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:33.597314 kubelet[3007]: E1216 15:27:33.596231 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.597314 kubelet[3007]: W1216 15:27:33.596248 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.597314 kubelet[3007]: E1216 15:27:33.596264 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.597314 kubelet[3007]: E1216 15:27:33.596730 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.597314 kubelet[3007]: W1216 15:27:33.596745 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.597314 kubelet[3007]: E1216 15:27:33.596761 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.597314 kubelet[3007]: E1216 15:27:33.597232 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.597314 kubelet[3007]: W1216 15:27:33.597259 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.597314 kubelet[3007]: E1216 15:27:33.597276 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.597805 kubelet[3007]: E1216 15:27:33.597543 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.597805 kubelet[3007]: W1216 15:27:33.597557 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.597805 kubelet[3007]: E1216 15:27:33.597571 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.597950 kubelet[3007]: E1216 15:27:33.597883 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.597950 kubelet[3007]: W1216 15:27:33.597897 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.597950 kubelet[3007]: E1216 15:27:33.597911 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:33.599358 kubelet[3007]: E1216 15:27:33.598251 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.599358 kubelet[3007]: W1216 15:27:33.598289 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.599358 kubelet[3007]: E1216 15:27:33.598306 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.599358 kubelet[3007]: E1216 15:27:33.598736 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.599358 kubelet[3007]: W1216 15:27:33.598751 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.599358 kubelet[3007]: E1216 15:27:33.598765 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.599358 kubelet[3007]: E1216 15:27:33.599197 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.599358 kubelet[3007]: W1216 15:27:33.599213 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.599358 kubelet[3007]: E1216 15:27:33.599228 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.600124 kubelet[3007]: E1216 15:27:33.599493 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.600124 kubelet[3007]: W1216 15:27:33.599556 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.600124 kubelet[3007]: E1216 15:27:33.599613 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.600124 kubelet[3007]: E1216 15:27:33.599978 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.600124 kubelet[3007]: W1216 15:27:33.599993 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.600124 kubelet[3007]: E1216 15:27:33.600007 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 15:27:33.600373 kubelet[3007]: E1216 15:27:33.600286 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.600373 kubelet[3007]: W1216 15:27:33.600319 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.600373 kubelet[3007]: E1216 15:27:33.600333 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.601288 kubelet[3007]: E1216 15:27:33.600666 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.601288 kubelet[3007]: W1216 15:27:33.600687 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.601288 kubelet[3007]: E1216 15:27:33.600701 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.601288 kubelet[3007]: E1216 15:27:33.601063 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.601288 kubelet[3007]: W1216 15:27:33.601077 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.601288 kubelet[3007]: E1216 15:27:33.601091 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.601578 kubelet[3007]: E1216 15:27:33.601394 3007 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 15:27:33.601578 kubelet[3007]: W1216 15:27:33.601408 3007 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 15:27:33.601578 kubelet[3007]: E1216 15:27:33.601422 3007 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 15:27:33.611915 systemd[1]: cri-containerd-1c29a83440c7ee4eb42ff6b713fa084ff8dbca4409ccb85391f774f21649c50b.scope: Deactivated successfully. Dec 16 15:27:33.616000 audit: BPF prog-id=172 op=UNLOAD Dec 16 15:27:33.637281 containerd[1667]: time="2025-12-16T15:27:33.637209701Z" level=info msg="received container exit event container_id:\"1c29a83440c7ee4eb42ff6b713fa084ff8dbca4409ccb85391f774f21649c50b\" id:\"1c29a83440c7ee4eb42ff6b713fa084ff8dbca4409ccb85391f774f21649c50b\" pid:3714 exited_at:{seconds:1765898853 nanos:614847091}" Dec 16 15:27:33.694803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c29a83440c7ee4eb42ff6b713fa084ff8dbca4409ccb85391f774f21649c50b-rootfs.mount: Deactivated successfully. 
Dec 16 15:27:34.561765 containerd[1667]: time="2025-12-16T15:27:34.561712562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 15:27:35.335802 kubelet[3007]: E1216 15:27:35.335694 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:27:37.352152 kubelet[3007]: E1216 15:27:37.351944 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:27:39.335638 kubelet[3007]: E1216 15:27:39.334864 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:27:41.334163 kubelet[3007]: E1216 15:27:41.334094 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:27:42.848254 containerd[1667]: time="2025-12-16T15:27:42.848131107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 15:27:42.852207 containerd[1667]: time="2025-12-16T15:27:42.852000268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 8.290236586s" Dec 16 15:27:42.852207 containerd[1667]: time="2025-12-16T15:27:42.852088001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 15:27:42.854004 containerd[1667]: time="2025-12-16T15:27:42.853937701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:42.855347 containerd[1667]: time="2025-12-16T15:27:42.854913516Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:42.856702 containerd[1667]: time="2025-12-16T15:27:42.855934469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:42.863168 containerd[1667]: time="2025-12-16T15:27:42.863112669Z" level=info msg="CreateContainer within sandbox \"151d76a350eadc3767b81e38d6b4e44ebfa991730600613bd5c06526630025c6\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 15:27:42.880806 containerd[1667]: time="2025-12-16T15:27:42.880754252Z" level=info msg="Container 6f702f07c1cf5b17571f2c0915ceab1568d7e68dbb03d07a70717d2785d04f18: CDI devices from CRI Config.CDIDevices: []" Dec 16 15:27:42.887215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1363976642.mount: Deactivated successfully. Dec 16 15:27:42.897936 containerd[1667]: time="2025-12-16T15:27:42.897775568Z" level=info msg="CreateContainer within sandbox \"151d76a350eadc3767b81e38d6b4e44ebfa991730600613bd5c06526630025c6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6f702f07c1cf5b17571f2c0915ceab1568d7e68dbb03d07a70717d2785d04f18\"" Dec 16 15:27:42.898990 containerd[1667]: time="2025-12-16T15:27:42.898831160Z" level=info msg="StartContainer for \"6f702f07c1cf5b17571f2c0915ceab1568d7e68dbb03d07a70717d2785d04f18\"" Dec 16 15:27:42.902963 containerd[1667]: time="2025-12-16T15:27:42.902884362Z" level=info msg="connecting to shim 6f702f07c1cf5b17571f2c0915ceab1568d7e68dbb03d07a70717d2785d04f18" address="unix:///run/containerd/s/15ab7979b12ff74eeda450d0b6978db7bb83acb37b8e31567d4143868c9224bd" protocol=ttrpc version=3 Dec 16 15:27:42.949979 systemd[1]: Started cri-containerd-6f702f07c1cf5b17571f2c0915ceab1568d7e68dbb03d07a70717d2785d04f18.scope - libcontainer container 6f702f07c1cf5b17571f2c0915ceab1568d7e68dbb03d07a70717d2785d04f18. Dec 16 15:27:43.029000 audit: BPF prog-id=173 op=LOAD Dec 16 15:27:43.033189 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 15:27:43.033813 kernel: audit: type=1334 audit(1765898863.029:592): prog-id=173 op=LOAD Dec 16 15:27:43.029000 audit[3776]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3537 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:43.036963 kernel: audit: type=1300 audit(1765898863.029:592): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3537 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:43.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666373032663037633163663562313735373166326330393135636561 Dec 16 15:27:43.042138 kernel: audit: type=1327 audit(1765898863.029:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666373032663037633163663562313735373166326330393135636561 Dec 16 15:27:43.029000 audit: BPF prog-id=174 op=LOAD Dec 16 15:27:43.049603 kernel: audit: type=1334 audit(1765898863.029:593): prog-id=174 op=LOAD Dec 16 15:27:43.029000 audit[3776]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3537 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:43.055537 kernel: audit: type=1300 audit(1765898863.029:593): arch=c000003e 
syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3537 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:43.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666373032663037633163663562313735373166326330393135636561 Dec 16 15:27:43.031000 audit: BPF prog-id=174 op=UNLOAD Dec 16 15:27:43.063657 kernel: audit: type=1327 audit(1765898863.029:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666373032663037633163663562313735373166326330393135636561 Dec 16 15:27:43.063782 kernel: audit: type=1334 audit(1765898863.031:594): prog-id=174 op=UNLOAD Dec 16 15:27:43.031000 audit[3776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3537 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:43.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666373032663037633163663562313735373166326330393135636561 Dec 16 15:27:43.073526 kernel: audit: type=1300 audit(1765898863.031:594): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3537 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:43.073617 kernel: audit: type=1327 audit(1765898863.031:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666373032663037633163663562313735373166326330393135636561 Dec 16 15:27:43.031000 audit: BPF prog-id=173 op=UNLOAD Dec 16 15:27:43.078290 kernel: audit: type=1334 audit(1765898863.031:595): prog-id=173 op=UNLOAD Dec 16 15:27:43.031000 audit[3776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3537 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:43.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666373032663037633163663562313735373166326330393135636561 Dec 16 15:27:43.031000 audit: BPF prog-id=175 op=LOAD Dec 16 15:27:43.031000 audit[3776]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3537 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:43.031000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666373032663037633163663562313735373166326330393135636561 Dec 16 15:27:43.101077 containerd[1667]: time="2025-12-16T15:27:43.100956572Z" level=info msg="StartContainer for \"6f702f07c1cf5b17571f2c0915ceab1568d7e68dbb03d07a70717d2785d04f18\" returns successfully" Dec 16 15:27:43.339172 kubelet[3007]: E1216 15:27:43.338710 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:27:44.094122 systemd[1]: cri-containerd-6f702f07c1cf5b17571f2c0915ceab1568d7e68dbb03d07a70717d2785d04f18.scope: Deactivated successfully. Dec 16 15:27:44.095413 systemd[1]: cri-containerd-6f702f07c1cf5b17571f2c0915ceab1568d7e68dbb03d07a70717d2785d04f18.scope: Consumed 750ms CPU time, 161.8M memory peak, 9.2M read from disk, 171.3M written to disk. Dec 16 15:27:44.098000 audit: BPF prog-id=175 op=UNLOAD Dec 16 15:27:44.103595 containerd[1667]: time="2025-12-16T15:27:44.103545729Z" level=info msg="received container exit event container_id:\"6f702f07c1cf5b17571f2c0915ceab1568d7e68dbb03d07a70717d2785d04f18\" id:\"6f702f07c1cf5b17571f2c0915ceab1568d7e68dbb03d07a70717d2785d04f18\" pid:3789 exited_at:{seconds:1765898864 nanos:98333986}" Dec 16 15:27:44.153041 kubelet[3007]: I1216 15:27:44.152996 3007 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 15:27:44.207140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f702f07c1cf5b17571f2c0915ceab1568d7e68dbb03d07a70717d2785d04f18-rootfs.mount: Deactivated successfully. Dec 16 15:27:44.288779 systemd[1]: Created slice kubepods-burstable-pod8a3ce37e_c24e_49fb_956d_3bca98f84f79.slice - libcontainer container kubepods-burstable-pod8a3ce37e_c24e_49fb_956d_3bca98f84f79.slice. Dec 16 15:27:44.313227 systemd[1]: Created slice kubepods-besteffort-pod7b6af861_64ec_4458_a739_99f9aaa2e0d3.slice - libcontainer container kubepods-besteffort-pod7b6af861_64ec_4458_a739_99f9aaa2e0d3.slice. Dec 16 15:27:44.341449 systemd[1]: Created slice kubepods-besteffort-pod4428d9ef_7f5f_4ea4_93cb_e174a33d2ea2.slice - libcontainer container kubepods-besteffort-pod4428d9ef_7f5f_4ea4_93cb_e174a33d2ea2.slice. 
Dec 16 15:27:44.344567 kubelet[3007]: I1216 15:27:44.342777 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b6af861-64ec-4458-a739-99f9aaa2e0d3-whisker-backend-key-pair\") pod \"whisker-54b6d84d89-rmmv2\" (UID: \"7b6af861-64ec-4458-a739-99f9aaa2e0d3\") " pod="calico-system/whisker-54b6d84d89-rmmv2" Dec 16 15:27:44.344567 kubelet[3007]: I1216 15:27:44.342834 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfgrl\" (UniqueName: \"kubernetes.io/projected/7b6af861-64ec-4458-a739-99f9aaa2e0d3-kube-api-access-dfgrl\") pod \"whisker-54b6d84d89-rmmv2\" (UID: \"7b6af861-64ec-4458-a739-99f9aaa2e0d3\") " pod="calico-system/whisker-54b6d84d89-rmmv2" Dec 16 15:27:44.344567 kubelet[3007]: I1216 15:27:44.342868 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a3ce37e-c24e-49fb-956d-3bca98f84f79-config-volume\") pod \"coredns-66bc5c9577-kpx8g\" (UID: \"8a3ce37e-c24e-49fb-956d-3bca98f84f79\") " pod="kube-system/coredns-66bc5c9577-kpx8g" Dec 16 15:27:44.344567 kubelet[3007]: I1216 15:27:44.342897 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b6af861-64ec-4458-a739-99f9aaa2e0d3-whisker-ca-bundle\") pod \"whisker-54b6d84d89-rmmv2\" (UID: \"7b6af861-64ec-4458-a739-99f9aaa2e0d3\") " pod="calico-system/whisker-54b6d84d89-rmmv2" Dec 16 15:27:44.344567 kubelet[3007]: I1216 15:27:44.342929 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nchjp\" (UniqueName: \"kubernetes.io/projected/8a3ce37e-c24e-49fb-956d-3bca98f84f79-kube-api-access-nchjp\") pod \"coredns-66bc5c9577-kpx8g\" (UID: \"8a3ce37e-c24e-49fb-956d-3bca98f84f79\") " pod="kube-system/coredns-66bc5c9577-kpx8g" Dec 16 15:27:44.358864 systemd[1]: Created slice kubepods-burstable-pod54ee0443_31bd_4488_a38f_608c67dce5d8.slice - libcontainer container kubepods-burstable-pod54ee0443_31bd_4488_a38f_608c67dce5d8.slice. Dec 16 15:27:44.371679 systemd[1]: Created slice kubepods-besteffort-podfa623572_3c69_4396_806f_a142b1ffa21a.slice - libcontainer container kubepods-besteffort-podfa623572_3c69_4396_806f_a142b1ffa21a.slice. Dec 16 15:27:44.384792 systemd[1]: Created slice kubepods-besteffort-pod42b32087_b938_4963_9aa0_ab40f5c370b3.slice - libcontainer container kubepods-besteffort-pod42b32087_b938_4963_9aa0_ab40f5c370b3.slice. Dec 16 15:27:44.396008 systemd[1]: Created slice kubepods-besteffort-pod5a7920f3_ed03_480b_921f_a7a3eaa95ad5.slice - libcontainer container kubepods-besteffort-pod5a7920f3_ed03_480b_921f_a7a3eaa95ad5.slice. Dec 16 15:27:44.410233 systemd[1]: Created slice kubepods-besteffort-pod71c27099_4499_4f5b_8630_5d35b5c1100b.slice - libcontainer container kubepods-besteffort-pod71c27099_4499_4f5b_8630_5d35b5c1100b.slice. 
Dec 16 15:27:44.444940 kubelet[3007]: I1216 15:27:44.444712 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh5zp\" (UniqueName: \"kubernetes.io/projected/4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2-kube-api-access-kh5zp\") pod \"calico-apiserver-b4bf85fc-lt27n\" (UID: \"4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2\") " pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" Dec 16 15:27:44.445627 kubelet[3007]: I1216 15:27:44.445208 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a7920f3-ed03-480b-921f-a7a3eaa95ad5-calico-apiserver-certs\") pod \"calico-apiserver-7c8d6c5b45-zv6vg\" (UID: \"5a7920f3-ed03-480b-921f-a7a3eaa95ad5\") " pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" Dec 16 15:27:44.446104 kubelet[3007]: I1216 15:27:44.446026 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm2vl\" (UniqueName: \"kubernetes.io/projected/5a7920f3-ed03-480b-921f-a7a3eaa95ad5-kube-api-access-lm2vl\") pod \"calico-apiserver-7c8d6c5b45-zv6vg\" (UID: \"5a7920f3-ed03-480b-921f-a7a3eaa95ad5\") " pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" Dec 16 15:27:44.446545 kubelet[3007]: I1216 15:27:44.446399 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42b32087-b938-4963-9aa0-ab40f5c370b3-calico-apiserver-certs\") pod \"calico-apiserver-b4bf85fc-bgpbx\" (UID: \"42b32087-b938-4963-9aa0-ab40f5c370b3\") " pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" Dec 16 15:27:44.446545 kubelet[3007]: I1216 15:27:44.446458 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fa623572-3c69-4396-806f-a142b1ffa21a-goldmane-key-pair\") pod \"goldmane-7c778bb748-gjpms\" (UID: \"fa623572-3c69-4396-806f-a142b1ffa21a\") " pod="calico-system/goldmane-7c778bb748-gjpms" Dec 16 15:27:44.446545 kubelet[3007]: I1216 15:27:44.446488 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2crd7\" (UniqueName: \"kubernetes.io/projected/fa623572-3c69-4396-806f-a142b1ffa21a-kube-api-access-2crd7\") pod \"goldmane-7c778bb748-gjpms\" (UID: \"fa623572-3c69-4396-806f-a142b1ffa21a\") " pod="calico-system/goldmane-7c778bb748-gjpms" Dec 16 15:27:44.446545 kubelet[3007]: I1216 15:27:44.446543 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-659qn\" (UniqueName: \"kubernetes.io/projected/54ee0443-31bd-4488-a38f-608c67dce5d8-kube-api-access-659qn\") pod \"coredns-66bc5c9577-zkpwr\" (UID: \"54ee0443-31bd-4488-a38f-608c67dce5d8\") " pod="kube-system/coredns-66bc5c9577-zkpwr" Dec 16 15:27:44.446796 kubelet[3007]: I1216 15:27:44.446579 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71c27099-4499-4f5b-8630-5d35b5c1100b-tigera-ca-bundle\") pod \"calico-kube-controllers-75675d747f-2cjwb\" (UID: \"71c27099-4499-4f5b-8630-5d35b5c1100b\") " pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" Dec 16 15:27:44.446796 kubelet[3007]: I1216 15:27:44.446605 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa623572-3c69-4396-806f-a142b1ffa21a-config\") pod \"goldmane-7c778bb748-gjpms\" (UID: \"fa623572-3c69-4396-806f-a142b1ffa21a\") " pod="calico-system/goldmane-7c778bb748-gjpms" Dec 16 15:27:44.446796 kubelet[3007]: I1216 15:27:44.446653 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2-calico-apiserver-certs\") pod \"calico-apiserver-b4bf85fc-lt27n\" (UID: \"4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2\") " pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" Dec 16 15:27:44.446796 kubelet[3007]: I1216 15:27:44.446712 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbsw\" (UniqueName: \"kubernetes.io/projected/42b32087-b938-4963-9aa0-ab40f5c370b3-kube-api-access-vfbsw\") pod \"calico-apiserver-b4bf85fc-bgpbx\" (UID: \"42b32087-b938-4963-9aa0-ab40f5c370b3\") " pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" Dec 16 15:27:44.446796 kubelet[3007]: I1216 15:27:44.446741 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gc2v\" (UniqueName: \"kubernetes.io/projected/71c27099-4499-4f5b-8630-5d35b5c1100b-kube-api-access-7gc2v\") pod \"calico-kube-controllers-75675d747f-2cjwb\" (UID: \"71c27099-4499-4f5b-8630-5d35b5c1100b\") " pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" Dec 16 15:27:44.447014 kubelet[3007]: I1216 15:27:44.446787 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa623572-3c69-4396-806f-a142b1ffa21a-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-gjpms\" (UID: \"fa623572-3c69-4396-806f-a142b1ffa21a\") " pod="calico-system/goldmane-7c778bb748-gjpms" Dec 16 15:27:44.447014 kubelet[3007]: I1216 15:27:44.446817 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54ee0443-31bd-4488-a38f-608c67dce5d8-config-volume\") pod \"coredns-66bc5c9577-zkpwr\" (UID: \"54ee0443-31bd-4488-a38f-608c67dce5d8\") " pod="kube-system/coredns-66bc5c9577-zkpwr" Dec 16 15:27:44.610407 containerd[1667]: time="2025-12-16T15:27:44.609883980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kpx8g,Uid:8a3ce37e-c24e-49fb-956d-3bca98f84f79,Namespace:kube-system,Attempt:0,}" Dec 16 15:27:44.655976 containerd[1667]: time="2025-12-16T15:27:44.655921089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b6d84d89-rmmv2,Uid:7b6af861-64ec-4458-a739-99f9aaa2e0d3,Namespace:calico-system,Attempt:0,}" Dec 16 15:27:44.668910 containerd[1667]: time="2025-12-16T15:27:44.668851302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zkpwr,Uid:54ee0443-31bd-4488-a38f-608c67dce5d8,Namespace:kube-system,Attempt:0,}" Dec 16 15:27:44.675463 containerd[1667]: time="2025-12-16T15:27:44.675388296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4bf85fc-lt27n,Uid:4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2,Namespace:calico-apiserver,Attempt:0,}" Dec 16 15:27:44.687478 containerd[1667]: time="2025-12-16T15:27:44.687435777Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-gjpms,Uid:fa623572-3c69-4396-806f-a142b1ffa21a,Namespace:calico-system,Attempt:0,}" Dec 16 15:27:44.699409 containerd[1667]: time="2025-12-16T15:27:44.699165729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4bf85fc-bgpbx,Uid:42b32087-b938-4963-9aa0-ab40f5c370b3,Namespace:calico-apiserver,Attempt:0,}" Dec 16 15:27:44.735586 containerd[1667]: time="2025-12-16T15:27:44.735533442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75675d747f-2cjwb,Uid:71c27099-4499-4f5b-8630-5d35b5c1100b,Namespace:calico-system,Attempt:0,}" Dec 16 15:27:44.738101 containerd[1667]: time="2025-12-16T15:27:44.738049808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8d6c5b45-zv6vg,Uid:5a7920f3-ed03-480b-921f-a7a3eaa95ad5,Namespace:calico-apiserver,Attempt:0,}" Dec 16 15:27:44.844148 containerd[1667]: time="2025-12-16T15:27:44.843844205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 15:27:45.151251 containerd[1667]: time="2025-12-16T15:27:45.151187559Z" level=error msg="Failed to destroy network for sandbox \"63671f76da42f3b200a37e20be17a11e72e6ece4b596f4bf0a5b0233399b5747\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.154451 containerd[1667]: time="2025-12-16T15:27:45.154409813Z" level=error msg="Failed to destroy network for sandbox \"b18f921fe56772bcadacc895697d68f7ecc8de28e9deed2f7bd1f53d43b6c510\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.156947 containerd[1667]: time="2025-12-16T15:27:45.156889741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b6d84d89-rmmv2,Uid:7b6af861-64ec-4458-a739-99f9aaa2e0d3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63671f76da42f3b200a37e20be17a11e72e6ece4b596f4bf0a5b0233399b5747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.160254 kubelet[3007]: E1216 15:27:45.157909 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63671f76da42f3b200a37e20be17a11e72e6ece4b596f4bf0a5b0233399b5747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.160254 kubelet[3007]: E1216 15:27:45.158034 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63671f76da42f3b200a37e20be17a11e72e6ece4b596f4bf0a5b0233399b5747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54b6d84d89-rmmv2" Dec 16 15:27:45.160254 kubelet[3007]: E1216 15:27:45.158080 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"63671f76da42f3b200a37e20be17a11e72e6ece4b596f4bf0a5b0233399b5747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54b6d84d89-rmmv2" Dec 16 15:27:45.160498 kubelet[3007]: E1216 15:27:45.158163 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54b6d84d89-rmmv2_calico-system(7b6af861-64ec-4458-a739-99f9aaa2e0d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54b6d84d89-rmmv2_calico-system(7b6af861-64ec-4458-a739-99f9aaa2e0d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63671f76da42f3b200a37e20be17a11e72e6ece4b596f4bf0a5b0233399b5747\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54b6d84d89-rmmv2" podUID="7b6af861-64ec-4458-a739-99f9aaa2e0d3" Dec 16 15:27:45.166450 containerd[1667]: time="2025-12-16T15:27:45.166242949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4bf85fc-bgpbx,Uid:42b32087-b938-4963-9aa0-ab40f5c370b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18f921fe56772bcadacc895697d68f7ecc8de28e9deed2f7bd1f53d43b6c510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.181134 containerd[1667]: time="2025-12-16T15:27:45.180911957Z" level=error msg="Failed to destroy network for sandbox \"dfe4da568b71632d2730d9ed1355d3df162489233f8adc36016a86e2f00c67ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.181134 containerd[1667]: time="2025-12-16T15:27:45.180969632Z" level=error msg="Failed to destroy network for sandbox \"6b3196816089ec953d8467c0b6d455e592ce0a31d4f5d927ac7bc4ca270e67b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.181534 containerd[1667]: time="2025-12-16T15:27:45.181483567Z" level=error msg="Failed to destroy network for sandbox \"1df07531e9b5c96f2186a7a0accac7605005b1468e7bf27d1e6bca13dc8e210b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.185000 kubelet[3007]: E1216 15:27:45.182047 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18f921fe56772bcadacc895697d68f7ecc8de28e9deed2f7bd1f53d43b6c510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.185000 kubelet[3007]: E1216 15:27:45.182144 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b18f921fe56772bcadacc895697d68f7ecc8de28e9deed2f7bd1f53d43b6c510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" Dec 16 15:27:45.185000 kubelet[3007]: E1216 15:27:45.182176 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18f921fe56772bcadacc895697d68f7ecc8de28e9deed2f7bd1f53d43b6c510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" Dec 16 15:27:45.185194 kubelet[3007]: E1216 15:27:45.182264 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b4bf85fc-bgpbx_calico-apiserver(42b32087-b938-4963-9aa0-ab40f5c370b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b4bf85fc-bgpbx_calico-apiserver(42b32087-b938-4963-9aa0-ab40f5c370b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b18f921fe56772bcadacc895697d68f7ecc8de28e9deed2f7bd1f53d43b6c510\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" podUID="42b32087-b938-4963-9aa0-ab40f5c370b3" Dec 16 15:27:45.185462 containerd[1667]: time="2025-12-16T15:27:45.185421156Z" level=error msg="Failed to destroy network for sandbox \"8556dc3e6b6e61d852e21aa69abc277476d0590d673d565e6e2bbce6f9782a4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.186049 containerd[1667]: time="2025-12-16T15:27:45.180915074Z" level=error msg="Failed to destroy network for sandbox \"a9e409438feb690d14e3cfb11df0b1ec8377be03a12647fe60c36cfaf0dbd056\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.186863 containerd[1667]: time="2025-12-16T15:27:45.186814510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zkpwr,Uid:54ee0443-31bd-4488-a38f-608c67dce5d8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfe4da568b71632d2730d9ed1355d3df162489233f8adc36016a86e2f00c67ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.187447 kubelet[3007]: E1216 15:27:45.187380 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfe4da568b71632d2730d9ed1355d3df162489233f8adc36016a86e2f00c67ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.187541 kubelet[3007]: E1216 15:27:45.187465 3007 kuberuntime_sandbox.go:71] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfe4da568b71632d2730d9ed1355d3df162489233f8adc36016a86e2f00c67ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zkpwr" Dec 16 15:27:45.187541 kubelet[3007]: E1216 15:27:45.187495 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfe4da568b71632d2730d9ed1355d3df162489233f8adc36016a86e2f00c67ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zkpwr" Dec 16 15:27:45.187678 kubelet[3007]: E1216 15:27:45.187602 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zkpwr_kube-system(54ee0443-31bd-4488-a38f-608c67dce5d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zkpwr_kube-system(54ee0443-31bd-4488-a38f-608c67dce5d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfe4da568b71632d2730d9ed1355d3df162489233f8adc36016a86e2f00c67ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zkpwr" podUID="54ee0443-31bd-4488-a38f-608c67dce5d8" Dec 16 15:27:45.191091 containerd[1667]: time="2025-12-16T15:27:45.190954402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75675d747f-2cjwb,Uid:71c27099-4499-4f5b-8630-5d35b5c1100b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8556dc3e6b6e61d852e21aa69abc277476d0590d673d565e6e2bbce6f9782a4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.191554 kubelet[3007]: E1216 15:27:45.191519 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8556dc3e6b6e61d852e21aa69abc277476d0590d673d565e6e2bbce6f9782a4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.191724 kubelet[3007]: E1216 15:27:45.191694 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8556dc3e6b6e61d852e21aa69abc277476d0590d673d565e6e2bbce6f9782a4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" Dec 16 15:27:45.191916 kubelet[3007]: E1216 15:27:45.191885 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8556dc3e6b6e61d852e21aa69abc277476d0590d673d565e6e2bbce6f9782a4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" Dec 16 15:27:45.192129 kubelet[3007]: E1216 15:27:45.192069 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75675d747f-2cjwb_calico-system(71c27099-4499-4f5b-8630-5d35b5c1100b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75675d747f-2cjwb_calico-system(71c27099-4499-4f5b-8630-5d35b5c1100b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8556dc3e6b6e61d852e21aa69abc277476d0590d673d565e6e2bbce6f9782a4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" podUID="71c27099-4499-4f5b-8630-5d35b5c1100b" Dec 16 15:27:45.195168 containerd[1667]: time="2025-12-16T15:27:45.195028076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8d6c5b45-zv6vg,Uid:5a7920f3-ed03-480b-921f-a7a3eaa95ad5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df07531e9b5c96f2186a7a0accac7605005b1468e7bf27d1e6bca13dc8e210b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.195992 kubelet[3007]: E1216 15:27:45.195957 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df07531e9b5c96f2186a7a0accac7605005b1468e7bf27d1e6bca13dc8e210b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.196347 kubelet[3007]: E1216 15:27:45.196235 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df07531e9b5c96f2186a7a0accac7605005b1468e7bf27d1e6bca13dc8e210b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" Dec 16 15:27:45.196881 containerd[1667]: time="2025-12-16T15:27:45.196409204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-gjpms,Uid:fa623572-3c69-4396-806f-a142b1ffa21a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9e409438feb690d14e3cfb11df0b1ec8377be03a12647fe60c36cfaf0dbd056\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.197791 kubelet[3007]: E1216 15:27:45.197179 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df07531e9b5c96f2186a7a0accac7605005b1468e7bf27d1e6bca13dc8e210b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" Dec 16 15:27:45.197791 kubelet[3007]: E1216 15:27:45.197577 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3196816089ec953d8467c0b6d455e592ce0a31d4f5d927ac7bc4ca270e67b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.197791 kubelet[3007]: E1216 15:27:45.197618 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3196816089ec953d8467c0b6d455e592ce0a31d4f5d927ac7bc4ca270e67b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kpx8g" Dec 16 15:27:45.197791 kubelet[3007]: E1216 15:27:45.197657 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3196816089ec953d8467c0b6d455e592ce0a31d4f5d927ac7bc4ca270e67b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kpx8g" Dec 16 15:27:45.198059 containerd[1667]: time="2025-12-16T15:27:45.197201958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kpx8g,Uid:8a3ce37e-c24e-49fb-956d-3bca98f84f79,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3196816089ec953d8467c0b6d455e592ce0a31d4f5d927ac7bc4ca270e67b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.198145 kubelet[3007]: E1216 15:27:45.197715 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-kpx8g_kube-system(8a3ce37e-c24e-49fb-956d-3bca98f84f79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-kpx8g_kube-system(8a3ce37e-c24e-49fb-956d-3bca98f84f79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b3196816089ec953d8467c0b6d455e592ce0a31d4f5d927ac7bc4ca270e67b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kpx8g" podUID="8a3ce37e-c24e-49fb-956d-3bca98f84f79" Dec 16 15:27:45.198145 kubelet[3007]: E1216 15:27:45.197773 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9e409438feb690d14e3cfb11df0b1ec8377be03a12647fe60c36cfaf0dbd056\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.198145 kubelet[3007]: E1216 15:27:45.197818 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a9e409438feb690d14e3cfb11df0b1ec8377be03a12647fe60c36cfaf0dbd056\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-gjpms" Dec 16 15:27:45.198317 kubelet[3007]: E1216 15:27:45.197842 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9e409438feb690d14e3cfb11df0b1ec8377be03a12647fe60c36cfaf0dbd056\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-gjpms" Dec 16 15:27:45.198317 kubelet[3007]: E1216 15:27:45.197885 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-gjpms_calico-system(fa623572-3c69-4396-806f-a142b1ffa21a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-gjpms_calico-system(fa623572-3c69-4396-806f-a142b1ffa21a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9e409438feb690d14e3cfb11df0b1ec8377be03a12647fe60c36cfaf0dbd056\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-gjpms" podUID="fa623572-3c69-4396-806f-a142b1ffa21a" Dec 16 15:27:45.198772 kubelet[3007]: E1216 15:27:45.197670 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c8d6c5b45-zv6vg_calico-apiserver(5a7920f3-ed03-480b-921f-a7a3eaa95ad5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c8d6c5b45-zv6vg_calico-apiserver(5a7920f3-ed03-480b-921f-a7a3eaa95ad5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1df07531e9b5c96f2186a7a0accac7605005b1468e7bf27d1e6bca13dc8e210b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" podUID="5a7920f3-ed03-480b-921f-a7a3eaa95ad5" Dec 16 15:27:45.201785 containerd[1667]: time="2025-12-16T15:27:45.201703388Z" level=error msg="Failed to destroy network for sandbox \"1b0555c386da7df98e0c973fc65a12cc9b5a41d5b385a567562e32971d354fe6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.211921 containerd[1667]: time="2025-12-16T15:27:45.211748898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4bf85fc-lt27n,Uid:4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b0555c386da7df98e0c973fc65a12cc9b5a41d5b385a567562e32971d354fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.212080 kubelet[3007]: E1216 15:27:45.212005 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"1b0555c386da7df98e0c973fc65a12cc9b5a41d5b385a567562e32971d354fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.212080 kubelet[3007]: E1216 15:27:45.212064 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b0555c386da7df98e0c973fc65a12cc9b5a41d5b385a567562e32971d354fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" Dec 16 15:27:45.212208 kubelet[3007]: E1216 15:27:45.212095 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b0555c386da7df98e0c973fc65a12cc9b5a41d5b385a567562e32971d354fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" Dec 16 15:27:45.212208 kubelet[3007]: E1216 15:27:45.212161 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b4bf85fc-lt27n_calico-apiserver(4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b4bf85fc-lt27n_calico-apiserver(4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b0555c386da7df98e0c973fc65a12cc9b5a41d5b385a567562e32971d354fe6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" podUID="4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2" Dec 16 15:27:45.345903 systemd[1]: Created slice kubepods-besteffort-pod27e89a24_5a1a_4b44_908b_951574a9d075.slice - libcontainer container kubepods-besteffort-pod27e89a24_5a1a_4b44_908b_951574a9d075.slice. Dec 16 15:27:45.351530 containerd[1667]: time="2025-12-16T15:27:45.351452785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wvd4,Uid:27e89a24-5a1a-4b44-908b-951574a9d075,Namespace:calico-system,Attempt:0,}" Dec 16 15:27:45.435904 containerd[1667]: time="2025-12-16T15:27:45.435484728Z" level=error msg="Failed to destroy network for sandbox \"a4a689bdce6c10bc9c6e0b9c2230fb738d4ec62008a63f29fa82af066e549e74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.441757 systemd[1]: run-netns-cni\x2de1c7065d\x2d23e1\x2d0172\x2d076d\x2db21b91bedfd2.mount: Deactivated successfully. 
Dec 16 15:27:45.481935 containerd[1667]: time="2025-12-16T15:27:45.481777686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wvd4,Uid:27e89a24-5a1a-4b44-908b-951574a9d075,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4a689bdce6c10bc9c6e0b9c2230fb738d4ec62008a63f29fa82af066e549e74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.485143 kubelet[3007]: E1216 15:27:45.484537 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4a689bdce6c10bc9c6e0b9c2230fb738d4ec62008a63f29fa82af066e549e74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:45.485143 kubelet[3007]: E1216 15:27:45.484622 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4a689bdce6c10bc9c6e0b9c2230fb738d4ec62008a63f29fa82af066e549e74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7wvd4" Dec 16 15:27:45.485143 kubelet[3007]: E1216 15:27:45.484665 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4a689bdce6c10bc9c6e0b9c2230fb738d4ec62008a63f29fa82af066e549e74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7wvd4" Dec 16 15:27:45.487373 kubelet[3007]: E1216 15:27:45.484742 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7wvd4_calico-system(27e89a24-5a1a-4b44-908b-951574a9d075)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7wvd4_calico-system(27e89a24-5a1a-4b44-908b-951574a9d075)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4a689bdce6c10bc9c6e0b9c2230fb738d4ec62008a63f29fa82af066e549e74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:27:56.348506 containerd[1667]: time="2025-12-16T15:27:56.348337408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4bf85fc-bgpbx,Uid:42b32087-b938-4963-9aa0-ab40f5c370b3,Namespace:calico-apiserver,Attempt:0,}" Dec 16 15:27:56.509380 containerd[1667]: time="2025-12-16T15:27:56.509311350Z" level=error msg="Failed to destroy network for sandbox \"3802d074b4d5468b6a4eab67878ad45f8a9619d928540be735a8eea80adb29fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:56.514072 containerd[1667]: time="2025-12-16T15:27:56.512309983Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-b4bf85fc-bgpbx,Uid:42b32087-b938-4963-9aa0-ab40f5c370b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3802d074b4d5468b6a4eab67878ad45f8a9619d928540be735a8eea80adb29fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:56.514255 kubelet[3007]: E1216 15:27:56.513889 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3802d074b4d5468b6a4eab67878ad45f8a9619d928540be735a8eea80adb29fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:56.514255 kubelet[3007]: E1216 15:27:56.513970 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3802d074b4d5468b6a4eab67878ad45f8a9619d928540be735a8eea80adb29fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" Dec 16 15:27:56.514255 kubelet[3007]: E1216 15:27:56.514003 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3802d074b4d5468b6a4eab67878ad45f8a9619d928540be735a8eea80adb29fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" Dec 16 15:27:56.514871 systemd[1]: run-netns-cni\x2d9c25e5f0\x2d9ccb\x2d4924\x2d40cc\x2d859f679e85c7.mount: Deactivated successfully. 
Dec 16 15:27:56.523014 kubelet[3007]: E1216 15:27:56.522932 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b4bf85fc-bgpbx_calico-apiserver(42b32087-b938-4963-9aa0-ab40f5c370b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b4bf85fc-bgpbx_calico-apiserver(42b32087-b938-4963-9aa0-ab40f5c370b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3802d074b4d5468b6a4eab67878ad45f8a9619d928540be735a8eea80adb29fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" podUID="42b32087-b938-4963-9aa0-ab40f5c370b3" Dec 16 15:27:57.345918 containerd[1667]: time="2025-12-16T15:27:57.345668806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kpx8g,Uid:8a3ce37e-c24e-49fb-956d-3bca98f84f79,Namespace:kube-system,Attempt:0,}" Dec 16 15:27:57.349417 containerd[1667]: time="2025-12-16T15:27:57.349336550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75675d747f-2cjwb,Uid:71c27099-4499-4f5b-8630-5d35b5c1100b,Namespace:calico-system,Attempt:0,}" Dec 16 15:27:57.353365 containerd[1667]: time="2025-12-16T15:27:57.353308311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4bf85fc-lt27n,Uid:4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2,Namespace:calico-apiserver,Attempt:0,}" Dec 16 15:27:57.539539 containerd[1667]: time="2025-12-16T15:27:57.537775996Z" level=error msg="Failed to destroy network for sandbox \"9995ad5ba4c3ad20cd65ba0ae55b93103775e63b5a376ab0f5661955359765f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:57.541787 systemd[1]: run-netns-cni\x2d7e90c2f1\x2dd9c3\x2dab04\x2d972e\x2d48139788a7aa.mount: Deactivated successfully. 
Dec 16 15:27:57.545852 containerd[1667]: time="2025-12-16T15:27:57.544848139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kpx8g,Uid:8a3ce37e-c24e-49fb-956d-3bca98f84f79,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9995ad5ba4c3ad20cd65ba0ae55b93103775e63b5a376ab0f5661955359765f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:57.545987 kubelet[3007]: E1216 15:27:57.545712 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9995ad5ba4c3ad20cd65ba0ae55b93103775e63b5a376ab0f5661955359765f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:57.545987 kubelet[3007]: E1216 15:27:57.545797 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9995ad5ba4c3ad20cd65ba0ae55b93103775e63b5a376ab0f5661955359765f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kpx8g" Dec 16 15:27:57.545987 kubelet[3007]: E1216 15:27:57.545828 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9995ad5ba4c3ad20cd65ba0ae55b93103775e63b5a376ab0f5661955359765f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kpx8g" Dec 16 15:27:57.547344 kubelet[3007]: E1216 15:27:57.545913 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-kpx8g_kube-system(8a3ce37e-c24e-49fb-956d-3bca98f84f79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-kpx8g_kube-system(8a3ce37e-c24e-49fb-956d-3bca98f84f79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9995ad5ba4c3ad20cd65ba0ae55b93103775e63b5a376ab0f5661955359765f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kpx8g" podUID="8a3ce37e-c24e-49fb-956d-3bca98f84f79" Dec 16 15:27:57.575478 containerd[1667]: time="2025-12-16T15:27:57.575408305Z" level=error msg="Failed to destroy network for sandbox \"8993048e8b900bad1a3c3e9fd003d7d3f104cdad795e9fd233c188db23275d7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:57.580331 systemd[1]: run-netns-cni\x2dabe1a01b\x2d43b8\x2df942\x2df49e\x2d0132d3871393.mount: Deactivated successfully. 
Dec 16 15:27:57.585376 containerd[1667]: time="2025-12-16T15:27:57.585260787Z" level=error msg="Failed to destroy network for sandbox \"dc98931aecaab0ce7899066c9adc2198a6bc3be0a4585044d099541a9674e7eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:57.591867 containerd[1667]: time="2025-12-16T15:27:57.591801865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75675d747f-2cjwb,Uid:71c27099-4499-4f5b-8630-5d35b5c1100b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8993048e8b900bad1a3c3e9fd003d7d3f104cdad795e9fd233c188db23275d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:57.592872 kubelet[3007]: E1216 15:27:57.592820 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8993048e8b900bad1a3c3e9fd003d7d3f104cdad795e9fd233c188db23275d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:57.593126 kubelet[3007]: E1216 15:27:57.593092 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8993048e8b900bad1a3c3e9fd003d7d3f104cdad795e9fd233c188db23275d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" Dec 16 15:27:57.593405 kubelet[3007]: E1216 15:27:57.593238 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8993048e8b900bad1a3c3e9fd003d7d3f104cdad795e9fd233c188db23275d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" Dec 16 15:27:57.593405 kubelet[3007]: E1216 15:27:57.593343 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75675d747f-2cjwb_calico-system(71c27099-4499-4f5b-8630-5d35b5c1100b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75675d747f-2cjwb_calico-system(71c27099-4499-4f5b-8630-5d35b5c1100b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8993048e8b900bad1a3c3e9fd003d7d3f104cdad795e9fd233c188db23275d7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" podUID="71c27099-4499-4f5b-8630-5d35b5c1100b" Dec 16 15:27:57.602490 containerd[1667]: time="2025-12-16T15:27:57.602234066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4bf85fc-lt27n,Uid:4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc98931aecaab0ce7899066c9adc2198a6bc3be0a4585044d099541a9674e7eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:57.603283 kubelet[3007]: E1216 15:27:57.603072 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc98931aecaab0ce7899066c9adc2198a6bc3be0a4585044d099541a9674e7eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:27:57.603908 kubelet[3007]: E1216 15:27:57.603374 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc98931aecaab0ce7899066c9adc2198a6bc3be0a4585044d099541a9674e7eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" Dec 16 15:27:57.603990 kubelet[3007]: E1216 15:27:57.603907 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc98931aecaab0ce7899066c9adc2198a6bc3be0a4585044d099541a9674e7eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" Dec 16 15:27:57.604497 kubelet[3007]: E1216 15:27:57.604201 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b4bf85fc-lt27n_calico-apiserver(4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b4bf85fc-lt27n_calico-apiserver(4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc98931aecaab0ce7899066c9adc2198a6bc3be0a4585044d099541a9674e7eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" podUID="4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2" Dec 16 15:27:58.170049 containerd[1667]: time="2025-12-16T15:27:58.168429365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:58.199201 containerd[1667]: time="2025-12-16T15:27:58.199123646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 15:27:58.223177 containerd[1667]: time="2025-12-16T15:27:58.223080875Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 15:27:58.226980 containerd[1667]: time="2025-12-16T15:27:58.226800714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Dec 16 15:27:58.228385 containerd[1667]: time="2025-12-16T15:27:58.227496299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 13.382679695s" Dec 16 15:27:58.228385 containerd[1667]: time="2025-12-16T15:27:58.227570158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 15:27:58.269363 containerd[1667]: time="2025-12-16T15:27:58.269304494Z" level=info msg="CreateContainer within sandbox \"151d76a350eadc3767b81e38d6b4e44ebfa991730600613bd5c06526630025c6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 15:27:58.329694 containerd[1667]: time="2025-12-16T15:27:58.329581757Z" level=info msg="Container 0bd965b785bb4e8a2903512ee5290a4d7c1936472d5b442e273ced198eac47f3: CDI devices from CRI Config.CDIDevices: []" Dec 16 15:27:58.363180 systemd[1]: run-netns-cni\x2d0bb2f181\x2da30d\x2d9c09\x2dd0fb\x2d7f4658383fad.mount: Deactivated successfully. Dec 16 15:27:58.363339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1723341765.mount: Deactivated successfully. Dec 16 15:27:58.377290 containerd[1667]: time="2025-12-16T15:27:58.377222582Z" level=info msg="CreateContainer within sandbox \"151d76a350eadc3767b81e38d6b4e44ebfa991730600613bd5c06526630025c6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0bd965b785bb4e8a2903512ee5290a4d7c1936472d5b442e273ced198eac47f3\"" Dec 16 15:27:58.378769 containerd[1667]: time="2025-12-16T15:27:58.378677259Z" level=info msg="StartContainer for \"0bd965b785bb4e8a2903512ee5290a4d7c1936472d5b442e273ced198eac47f3\"" Dec 16 15:27:58.389564 containerd[1667]: time="2025-12-16T15:27:58.389112422Z" level=info msg="connecting to shim 0bd965b785bb4e8a2903512ee5290a4d7c1936472d5b442e273ced198eac47f3" address="unix:///run/containerd/s/15ab7979b12ff74eeda450d0b6978db7bb83acb37b8e31567d4143868c9224bd" protocol=ttrpc version=3 Dec 16 15:27:58.523579 systemd[1]: Started cri-containerd-0bd965b785bb4e8a2903512ee5290a4d7c1936472d5b442e273ced198eac47f3.scope - libcontainer container 0bd965b785bb4e8a2903512ee5290a4d7c1936472d5b442e273ced198eac47f3. 
Dec 16 15:27:58.591000 audit: BPF prog-id=176 op=LOAD Dec 16 15:27:58.597896 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 15:27:58.598011 kernel: audit: type=1334 audit(1765898878.591:598): prog-id=176 op=LOAD Dec 16 15:27:58.591000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3537 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:58.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643936356237383562623465386132393033353132656535323930 Dec 16 15:27:58.612037 kernel: audit: type=1300 audit(1765898878.591:598): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3537 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:58.612129 kernel: audit: type=1327 audit(1765898878.591:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643936356237383562623465386132393033353132656535323930 Dec 16 15:27:58.598000 audit: BPF prog-id=177 op=LOAD Dec 16 15:27:58.616015 kernel: audit: type=1334 audit(1765898878.598:599): prog-id=177 op=LOAD Dec 16 15:27:58.598000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3537 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:58.618636 kernel: audit: type=1300 audit(1765898878.598:599): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3537 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:58.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643936356237383562623465386132393033353132656535323930 Dec 16 15:27:58.598000 audit: BPF prog-id=177 op=UNLOAD Dec 16 15:27:58.631134 kernel: audit: type=1327 audit(1765898878.598:599): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643936356237383562623465386132393033353132656535323930 Dec 16 15:27:58.631692 kernel: audit: type=1334 audit(1765898878.598:600): prog-id=177 op=UNLOAD Dec 16 15:27:58.598000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3537 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:58.633765 kernel: audit: type=1300 
audit(1765898878.598:600): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3537 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:58.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643936356237383562623465386132393033353132656535323930 Dec 16 15:27:58.598000 audit: BPF prog-id=176 op=UNLOAD Dec 16 15:27:58.643753 kernel: audit: type=1327 audit(1765898878.598:600): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643936356237383562623465386132393033353132656535323930 Dec 16 15:27:58.643846 kernel: audit: type=1334 audit(1765898878.598:601): prog-id=176 op=UNLOAD Dec 16 15:27:58.598000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3537 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:58.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643936356237383562623465386132393033353132656535323930 Dec 16 15:27:58.598000 audit: BPF prog-id=178 op=LOAD Dec 16 15:27:58.598000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3537 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:27:58.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643936356237383562623465386132393033353132656535323930 Dec 16 15:27:58.677280 containerd[1667]: time="2025-12-16T15:27:58.677210397Z" level=info msg="StartContainer for \"0bd965b785bb4e8a2903512ee5290a4d7c1936472d5b442e273ced198eac47f3\" returns successfully" Dec 16 15:27:59.042658 kubelet[3007]: I1216 15:27:59.036794 3007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xh4xj" podStartSLOduration=2.446140001 podStartE2EDuration="33.036769535s" podCreationTimestamp="2025-12-16 15:27:26 +0000 UTC" firstStartedPulling="2025-12-16 15:27:27.639758773 +0000 UTC m=+26.649693887" lastFinishedPulling="2025-12-16 15:27:58.230388308 +0000 UTC m=+57.240323421" observedRunningTime="2025-12-16 15:27:59.031899008 +0000 UTC m=+58.041834143" watchObservedRunningTime="2025-12-16 15:27:59.036769535 +0000 UTC m=+58.046704657" Dec 16 15:27:59.354830 containerd[1667]: time="2025-12-16T15:27:59.353163219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b6d84d89-rmmv2,Uid:7b6af861-64ec-4458-a739-99f9aaa2e0d3,Namespace:calico-system,Attempt:0,}" Dec 16 15:27:59.358331 containerd[1667]: time="2025-12-16T15:27:59.358012521Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zkpwr,Uid:54ee0443-31bd-4488-a38f-608c67dce5d8,Namespace:kube-system,Attempt:0,}" Dec 16 15:27:59.358331 containerd[1667]: time="2025-12-16T15:27:59.358198280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8d6c5b45-zv6vg,Uid:5a7920f3-ed03-480b-921f-a7a3eaa95ad5,Namespace:calico-apiserver,Attempt:0,}" Dec 16 15:27:59.358989 containerd[1667]: time="2025-12-16T15:27:59.358956220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-gjpms,Uid:fa623572-3c69-4396-806f-a142b1ffa21a,Namespace:calico-system,Attempt:0,}" Dec 16 15:27:59.408622 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 15:27:59.409165 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 15:28:00.342931 containerd[1667]: time="2025-12-16T15:28:00.342456760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wvd4,Uid:27e89a24-5a1a-4b44-908b-951574a9d075,Namespace:calico-system,Attempt:0,}" Dec 16 15:28:00.401213 systemd-networkd[1577]: cali1a431d5cb33: Link UP Dec 16 15:28:00.401764 systemd-networkd[1577]: cali1a431d5cb33: Gained carrier Dec 16 15:28:00.417991 containerd[1667]: 2025-12-16 15:27:59.918 [INFO][4322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" Dec 16 15:28:00.417991 containerd[1667]: 2025-12-16 15:27:59.919 [INFO][4322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" iface="eth0" netns="/var/run/netns/cni-a10b6cdc-fb6d-3aac-157a-efab7cc109c9" Dec 16 15:28:00.417991 containerd[1667]: 2025-12-16 15:27:59.920 [INFO][4322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" iface="eth0" netns="/var/run/netns/cni-a10b6cdc-fb6d-3aac-157a-efab7cc109c9" Dec 16 15:28:00.417991 containerd[1667]: 2025-12-16 15:27:59.921 [INFO][4322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" iface="eth0" netns="/var/run/netns/cni-a10b6cdc-fb6d-3aac-157a-efab7cc109c9" Dec 16 15:28:00.417991 containerd[1667]: 2025-12-16 15:27:59.921 [INFO][4322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" Dec 16 15:28:00.417991 containerd[1667]: 2025-12-16 15:27:59.921 [INFO][4322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" Dec 16 15:28:00.417991 containerd[1667]: 2025-12-16 15:28:00.241 [INFO][4366] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" HandleID="k8s-pod-network.e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" Workload="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" Dec 16 15:28:00.417991 containerd[1667]: 2025-12-16 15:28:00.242 [INFO][4366] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:00.417991 containerd[1667]: 2025-12-16 15:28:00.336 [INFO][4366] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 15:28:00.421155 containerd[1667]: 2025-12-16 15:28:00.358 [WARNING][4366] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" HandleID="k8s-pod-network.e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" Workload="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" Dec 16 15:28:00.421155 containerd[1667]: 2025-12-16 15:28:00.358 [INFO][4366] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" HandleID="k8s-pod-network.e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" Workload="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" Dec 16 15:28:00.421155 containerd[1667]: 2025-12-16 15:28:00.371 [INFO][4366] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 15:28:00.421155 containerd[1667]: 2025-12-16 15:28:00.405 [INFO][4322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b" Dec 16 15:28:00.425904 systemd[1]: run-netns-cni\x2da10b6cdc\x2dfb6d\x2d3aac\x2d157a\x2defab7cc109c9.mount: Deactivated successfully. Dec 16 15:28:00.476934 containerd[1667]: time="2025-12-16T15:28:00.475744046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-gjpms,Uid:fa623572-3c69-4396-806f-a142b1ffa21a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:28:00.483984 containerd[1667]: 2025-12-16 15:27:59.866 [INFO][4326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" Dec 16 15:28:00.483984 containerd[1667]: 2025-12-16 15:27:59.869 [INFO][4326] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" iface="eth0" netns="/var/run/netns/cni-98643730-140b-0fbb-c538-b95c96925520" Dec 16 15:28:00.483984 containerd[1667]: 2025-12-16 15:27:59.871 [INFO][4326] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" iface="eth0" netns="/var/run/netns/cni-98643730-140b-0fbb-c538-b95c96925520" Dec 16 15:28:00.483984 containerd[1667]: 2025-12-16 15:27:59.873 [INFO][4326] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" iface="eth0" netns="/var/run/netns/cni-98643730-140b-0fbb-c538-b95c96925520" Dec 16 15:28:00.483984 containerd[1667]: 2025-12-16 15:27:59.873 [INFO][4326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" Dec 16 15:28:00.483984 containerd[1667]: 2025-12-16 15:27:59.873 [INFO][4326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" Dec 16 15:28:00.483984 containerd[1667]: 2025-12-16 15:28:00.241 [INFO][4347] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" HandleID="k8s-pod-network.f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" Dec 16 15:28:00.483984 containerd[1667]: 2025-12-16 15:28:00.245 [INFO][4347] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:00.483984 containerd[1667]: 2025-12-16 15:28:00.370 [INFO][4347] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 15:28:00.485881 containerd[1667]: 2025-12-16 15:28:00.471 [WARNING][4347] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" HandleID="k8s-pod-network.f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" Dec 16 15:28:00.485881 containerd[1667]: 2025-12-16 15:28:00.471 [INFO][4347] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" HandleID="k8s-pod-network.f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" Dec 16 15:28:00.485881 containerd[1667]: 2025-12-16 15:28:00.475 [INFO][4347] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 15:28:00.485881 containerd[1667]: 2025-12-16 15:28:00.481 [INFO][4326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01" Dec 16 15:28:00.490850 systemd[1]: run-netns-cni\x2d98643730\x2d140b\x2d0fbb\x2dc538\x2db95c96925520.mount: Deactivated successfully. 
Dec 16 15:28:00.491709 containerd[1667]: 2025-12-16 15:27:59.776 [INFO][4250] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 15:28:00.491709 containerd[1667]: 2025-12-16 15:27:59.871 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0 whisker-54b6d84d89- calico-system 7b6af861-64ec-4458-a739-99f9aaa2e0d3 886 0 2025-12-16 15:27:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54b6d84d89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-g2i2t.gb1.brightbox.com whisker-54b6d84d89-rmmv2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1a431d5cb33 [] [] }} ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Namespace="calico-system" Pod="whisker-54b6d84d89-rmmv2" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-" Dec 16 15:28:00.491709 containerd[1667]: 2025-12-16 15:27:59.871 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Namespace="calico-system" Pod="whisker-54b6d84d89-rmmv2" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:28:00.491709 containerd[1667]: 2025-12-16 15:28:00.241 [INFO][4355] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" HandleID="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:28:00.492056 containerd[1667]: 2025-12-16 15:28:00.242 [INFO][4355] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" HandleID="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e770), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-g2i2t.gb1.brightbox.com", "pod":"whisker-54b6d84d89-rmmv2", "timestamp":"2025-12-16 15:28:00.241082993 +0000 UTC"}, Hostname:"srv-g2i2t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 15:28:00.492056 containerd[1667]: 2025-12-16 15:28:00.243 [INFO][4355] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:00.492056 containerd[1667]: 2025-12-16 15:28:00.243 [INFO][4355] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 15:28:00.492056 containerd[1667]: 2025-12-16 15:28:00.243 [INFO][4355] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-g2i2t.gb1.brightbox.com' Dec 16 15:28:00.492056 containerd[1667]: 2025-12-16 15:28:00.261 [INFO][4355] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.492056 containerd[1667]: 2025-12-16 15:28:00.272 [INFO][4355] ipam/ipam.go 394: Looking up existing affinities for host host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.492056 containerd[1667]: 2025-12-16 15:28:00.283 [INFO][4355] ipam/ipam.go 543: Ran out of existing affine blocks for host host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.492056 containerd[1667]: 2025-12-16 15:28:00.286 [INFO][4355] ipam/ipam.go 560: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.492056 containerd[1667]: 2025-12-16 15:28:00.289 [INFO][4355] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.126.192/26 Dec 16 15:28:00.492453 containerd[1667]: 2025-12-16 15:28:00.289 [INFO][4355] ipam/ipam.go 572: Found unclaimed block host="srv-g2i2t.gb1.brightbox.com" subnet=192.168.126.192/26 Dec 16 15:28:00.492453 containerd[1667]: 2025-12-16 15:28:00.289 [INFO][4355] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="srv-g2i2t.gb1.brightbox.com" subnet=192.168.126.192/26 Dec 16 15:28:00.492453 containerd[1667]: 2025-12-16 15:28:00.293 [INFO][4355] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="srv-g2i2t.gb1.brightbox.com" subnet=192.168.126.192/26 Dec 16 15:28:00.492453 containerd[1667]: 2025-12-16 15:28:00.293 [INFO][4355] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.492453 containerd[1667]: 2025-12-16 15:28:00.296 [INFO][4355] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.492453 containerd[1667]: 2025-12-16 15:28:00.299 [INFO][4355] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.492453 containerd[1667]: 2025-12-16 15:28:00.303 [INFO][4355] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.492453 containerd[1667]: 2025-12-16 15:28:00.303 [INFO][4355] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="srv-g2i2t.gb1.brightbox.com" subnet=192.168.126.192/26 Dec 16 15:28:00.492453 containerd[1667]: 2025-12-16 15:28:00.310 [INFO][4355] ipam/ipam_block_reader_writer.go 267: Successfully created block Dec 16 15:28:00.492453 containerd[1667]: 2025-12-16 15:28:00.310 [INFO][4355] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="srv-g2i2t.gb1.brightbox.com" subnet=192.168.126.192/26 Dec 16 15:28:00.492453 containerd[1667]: 2025-12-16 15:28:00.316 [INFO][4355] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="srv-g2i2t.gb1.brightbox.com" subnet=192.168.126.192/26 Dec 16 15:28:00.492453 containerd[1667]: 2025-12-16 15:28:00.316 [INFO][4355] ipam/ipam.go 607: Block '192.168.126.192/26' has 64 free ips which is more than 1 ips required. 
host="srv-g2i2t.gb1.brightbox.com" subnet=192.168.126.192/26 Dec 16 15:28:00.496341 containerd[1667]: 2025-12-16 15:28:00.317 [INFO][4355] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.496341 containerd[1667]: 2025-12-16 15:28:00.320 [INFO][4355] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8 Dec 16 15:28:00.496341 containerd[1667]: 2025-12-16 15:28:00.327 [INFO][4355] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.496341 containerd[1667]: 2025-12-16 15:28:00.336 [INFO][4355] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.192/26] block=192.168.126.192/26 handle="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.496341 containerd[1667]: 2025-12-16 15:28:00.336 [INFO][4355] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.192/26] handle="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.496341 containerd[1667]: 2025-12-16 15:28:00.337 [INFO][4355] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 15:28:00.496341 containerd[1667]: 2025-12-16 15:28:00.337 [INFO][4355] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.192/26] IPv6=[] ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" HandleID="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:28:00.497240 kubelet[3007]: E1216 15:28:00.495495 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:28:00.497240 kubelet[3007]: E1216 15:28:00.495645 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-gjpms" Dec 16 15:28:00.497240 kubelet[3007]: E1216 15:28:00.495680 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-gjpms" Dec 16 15:28:00.499147 containerd[1667]: 2025-12-16 15:28:00.344 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint 
ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Namespace="calico-system" Pod="whisker-54b6d84d89-rmmv2" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0", GenerateName:"whisker-54b6d84d89-", Namespace:"calico-system", SelfLink:"", UID:"7b6af861-64ec-4458-a739-99f9aaa2e0d3", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54b6d84d89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"", Pod:"whisker-54b6d84d89-rmmv2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1a431d5cb33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:00.499147 containerd[1667]: 2025-12-16 15:28:00.345 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.192/32] ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Namespace="calico-system" Pod="whisker-54b6d84d89-rmmv2" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:28:00.499335 kubelet[3007]: E1216 15:28:00.495767 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-gjpms_calico-system(fa623572-3c69-4396-806f-a142b1ffa21a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-gjpms_calico-system(fa623572-3c69-4396-806f-a142b1ffa21a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4fd7f8307b53990545df519dcd8a6523622f08d9f3e79fba3db6f9404f5e88b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-gjpms" podUID="fa623572-3c69-4396-806f-a142b1ffa21a" Dec 16 15:28:00.499430 containerd[1667]: 2025-12-16 15:28:00.345 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a431d5cb33 ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Namespace="calico-system" Pod="whisker-54b6d84d89-rmmv2" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:28:00.499430 containerd[1667]: 2025-12-16 15:28:00.398 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Namespace="calico-system" Pod="whisker-54b6d84d89-rmmv2" 
WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:28:00.501700 containerd[1667]: 2025-12-16 15:28:00.399 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Namespace="calico-system" Pod="whisker-54b6d84d89-rmmv2" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0", GenerateName:"whisker-54b6d84d89-", Namespace:"calico-system", SelfLink:"", UID:"7b6af861-64ec-4458-a739-99f9aaa2e0d3", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54b6d84d89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8", Pod:"whisker-54b6d84d89-rmmv2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1a431d5cb33", MAC:"be:95:52:55:29:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:00.501828 containerd[1667]: 2025-12-16 15:28:00.439 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Namespace="calico-system" Pod="whisker-54b6d84d89-rmmv2" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:28:00.512287 containerd[1667]: time="2025-12-16T15:28:00.509317646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8d6c5b45-zv6vg,Uid:5a7920f3-ed03-480b-921f-a7a3eaa95ad5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:28:00.521291 kubelet[3007]: E1216 15:28:00.520409 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:28:00.521473 kubelet[3007]: E1216 15:28:00.521357 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" Dec 16 15:28:00.521473 kubelet[3007]: E1216 15:28:00.521448 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" Dec 16 15:28:00.522164 kubelet[3007]: E1216 15:28:00.521593 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c8d6c5b45-zv6vg_calico-apiserver(5a7920f3-ed03-480b-921f-a7a3eaa95ad5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c8d6c5b45-zv6vg_calico-apiserver(5a7920f3-ed03-480b-921f-a7a3eaa95ad5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f43009bf475d26e5f4e7d07c5ef0452e5346326f823bd26c7c2f7b1a1e00da01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" podUID="5a7920f3-ed03-480b-921f-a7a3eaa95ad5" Dec 16 15:28:00.569851 containerd[1667]: 2025-12-16 15:27:59.892 [INFO][4300] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" Dec 16 15:28:00.569851 containerd[1667]: 2025-12-16 15:27:59.892 [INFO][4300] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" iface="eth0" netns="/var/run/netns/cni-3770e307-e6d2-3bca-28ae-b5c80e3768c2" Dec 16 15:28:00.569851 containerd[1667]: 2025-12-16 15:27:59.893 [INFO][4300] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" iface="eth0" netns="/var/run/netns/cni-3770e307-e6d2-3bca-28ae-b5c80e3768c2" Dec 16 15:28:00.569851 containerd[1667]: 2025-12-16 15:27:59.893 [INFO][4300] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" iface="eth0" netns="/var/run/netns/cni-3770e307-e6d2-3bca-28ae-b5c80e3768c2" Dec 16 15:28:00.569851 containerd[1667]: 2025-12-16 15:27:59.893 [INFO][4300] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" Dec 16 15:28:00.569851 containerd[1667]: 2025-12-16 15:27:59.894 [INFO][4300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" Dec 16 15:28:00.569851 containerd[1667]: 2025-12-16 15:28:00.240 [INFO][4353] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" HandleID="k8s-pod-network.a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" Workload="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" Dec 16 15:28:00.569851 containerd[1667]: 2025-12-16 15:28:00.246 [INFO][4353] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:00.569851 containerd[1667]: 2025-12-16 15:28:00.477 [INFO][4353] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 15:28:00.570349 containerd[1667]: 2025-12-16 15:28:00.526 [WARNING][4353] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" HandleID="k8s-pod-network.a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" Workload="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" Dec 16 15:28:00.570349 containerd[1667]: 2025-12-16 15:28:00.526 [INFO][4353] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" HandleID="k8s-pod-network.a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" Workload="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" Dec 16 15:28:00.570349 containerd[1667]: 2025-12-16 15:28:00.534 [INFO][4353] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 15:28:00.570349 containerd[1667]: 2025-12-16 15:28:00.553 [INFO][4300] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786" Dec 16 15:28:00.577902 systemd[1]: run-netns-cni\x2d3770e307\x2de6d2\x2d3bca\x2d28ae\x2db5c80e3768c2.mount: Deactivated successfully. 
Dec 16 15:28:00.583042 containerd[1667]: time="2025-12-16T15:28:00.582799670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zkpwr,Uid:54ee0443-31bd-4488-a38f-608c67dce5d8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:28:00.585463 kubelet[3007]: E1216 15:28:00.584674 3007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 15:28:00.585803 kubelet[3007]: E1216 15:28:00.585456 3007 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zkpwr" Dec 16 15:28:00.585803 kubelet[3007]: E1216 15:28:00.585493 3007 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zkpwr" Dec 16 15:28:00.585803 kubelet[3007]: E1216 15:28:00.585638 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zkpwr_kube-system(54ee0443-31bd-4488-a38f-608c67dce5d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zkpwr_kube-system(54ee0443-31bd-4488-a38f-608c67dce5d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a047171186d1325a137d9170d4a74717981ab5415f4251ae314d16cf1fb1c786\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zkpwr" podUID="54ee0443-31bd-4488-a38f-608c67dce5d8" Dec 16 15:28:00.784646 containerd[1667]: time="2025-12-16T15:28:00.784106659Z" level=info msg="connecting to shim 8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" address="unix:///run/containerd/s/c29fc4bd29e2f3727055a3ac34fd1271cc3555c3b7f9610be5637591c4609cfd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:28:00.828026 systemd-networkd[1577]: calic8d7a330f0b: Link UP Dec 16 15:28:00.843571 systemd-networkd[1577]: calic8d7a330f0b: Gained carrier Dec 16 15:28:00.897987 containerd[1667]: 2025-12-16 15:28:00.544 [INFO][4405] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 15:28:00.897987 containerd[1667]: 2025-12-16 15:28:00.596 [INFO][4405] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0 csi-node-driver- calico-system 27e89a24-5a1a-4b44-908b-951574a9d075 764 0 2025-12-16 15:27:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-g2i2t.gb1.brightbox.com csi-node-driver-7wvd4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic8d7a330f0b [] [] }} ContainerID="3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" Namespace="calico-system" Pod="csi-node-driver-7wvd4" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-" Dec 16 15:28:00.897987 containerd[1667]: 2025-12-16 15:28:00.596 [INFO][4405] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" Namespace="calico-system" Pod="csi-node-driver-7wvd4" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0" Dec 16 15:28:00.897987 containerd[1667]: 2025-12-16 15:28:00.686 [INFO][4427] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" HandleID="k8s-pod-network.3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" Workload="srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0" Dec 16 15:28:00.898694 containerd[1667]: 2025-12-16 15:28:00.688 [INFO][4427] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" HandleID="k8s-pod-network.3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" Workload="srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103760), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-g2i2t.gb1.brightbox.com", "pod":"csi-node-driver-7wvd4", "timestamp":"2025-12-16 15:28:00.686926243 +0000 UTC"}, Hostname:"srv-g2i2t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 15:28:00.898694 containerd[1667]: 2025-12-16 15:28:00.688 [INFO][4427] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:00.898694 containerd[1667]: 2025-12-16 15:28:00.689 [INFO][4427] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 15:28:00.898694 containerd[1667]: 2025-12-16 15:28:00.689 [INFO][4427] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-g2i2t.gb1.brightbox.com' Dec 16 15:28:00.898694 containerd[1667]: 2025-12-16 15:28:00.708 [INFO][4427] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.898694 containerd[1667]: 2025-12-16 15:28:00.723 [INFO][4427] ipam/ipam.go 394: Looking up existing affinities for host host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.898694 containerd[1667]: 2025-12-16 15:28:00.732 [INFO][4427] ipam/ipam.go 511: Trying affinity for 192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.898694 containerd[1667]: 2025-12-16 15:28:00.735 [INFO][4427] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.898694 containerd[1667]: 2025-12-16 15:28:00.743 [INFO][4427] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.899776 containerd[1667]: 2025-12-16 15:28:00.745 [INFO][4427] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.899776 containerd[1667]: 2025-12-16 15:28:00.749 [INFO][4427] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd Dec 16 15:28:00.899776 containerd[1667]: 2025-12-16 15:28:00.775 [INFO][4427] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.899776 containerd[1667]: 2025-12-16 15:28:00.806 [INFO][4427] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.193/26] block=192.168.126.192/26 handle="k8s-pod-network.3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.899776 containerd[1667]: 2025-12-16 15:28:00.806 [INFO][4427] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.193/26] handle="k8s-pod-network.3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:00.899776 containerd[1667]: 2025-12-16 15:28:00.807 [INFO][4427] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 15:28:00.899776 containerd[1667]: 2025-12-16 15:28:00.807 [INFO][4427] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.193/26] IPv6=[] ContainerID="3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" HandleID="k8s-pod-network.3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" Workload="srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0" Dec 16 15:28:00.900113 containerd[1667]: 2025-12-16 15:28:00.815 [INFO][4405] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" Namespace="calico-system" Pod="csi-node-driver-7wvd4" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27e89a24-5a1a-4b44-908b-951574a9d075", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-7wvd4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8d7a330f0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:00.900217 containerd[1667]: 2025-12-16 15:28:00.815 [INFO][4405] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.193/32] ContainerID="3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" Namespace="calico-system" Pod="csi-node-driver-7wvd4" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0" Dec 16 15:28:00.900217 containerd[1667]: 2025-12-16 15:28:00.815 [INFO][4405] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8d7a330f0b ContainerID="3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" Namespace="calico-system" Pod="csi-node-driver-7wvd4" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0" Dec 16 15:28:00.900217 containerd[1667]: 2025-12-16 15:28:00.837 [INFO][4405] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" Namespace="calico-system" Pod="csi-node-driver-7wvd4" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0" Dec 16 15:28:00.900995 containerd[1667]: 2025-12-16 15:28:00.839 [INFO][4405] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" Namespace="calico-system" Pod="csi-node-driver-7wvd4" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27e89a24-5a1a-4b44-908b-951574a9d075", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd", Pod:"csi-node-driver-7wvd4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8d7a330f0b", MAC:"ea:f3:e2:e8:9a:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:00.901105 containerd[1667]: 2025-12-16 15:28:00.886 [INFO][4405] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" Namespace="calico-system" Pod="csi-node-driver-7wvd4" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-csi--node--driver--7wvd4-eth0" Dec 16 15:28:00.929950 systemd[1]: Started cri-containerd-8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8.scope - libcontainer container 8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8. 
Dec 16 15:28:00.979894 containerd[1667]: time="2025-12-16T15:28:00.979707155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8d6c5b45-zv6vg,Uid:5a7920f3-ed03-480b-921f-a7a3eaa95ad5,Namespace:calico-apiserver,Attempt:0,}" Dec 16 15:28:00.982330 containerd[1667]: time="2025-12-16T15:28:00.981508952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zkpwr,Uid:54ee0443-31bd-4488-a38f-608c67dce5d8,Namespace:kube-system,Attempt:0,}" Dec 16 15:28:00.984608 containerd[1667]: time="2025-12-16T15:28:00.983873121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-gjpms,Uid:fa623572-3c69-4396-806f-a142b1ffa21a,Namespace:calico-system,Attempt:0,}" Dec 16 15:28:00.986665 containerd[1667]: time="2025-12-16T15:28:00.986493989Z" level=info msg="connecting to shim 3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd" address="unix:///run/containerd/s/77bcd9d3f0d2cd725253938b2506aaa928ca9e6dc42578f2d9a20cb71019dca2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:28:01.007000 audit: BPF prog-id=179 op=LOAD Dec 16 15:28:01.008000 audit: BPF prog-id=180 op=LOAD Dec 16 15:28:01.008000 audit[4455]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4440 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646532636261613738353531353737323135656163353264626134 Dec 16 15:28:01.009000 audit: BPF prog-id=180 op=UNLOAD Dec 16 15:28:01.009000 audit[4455]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4440 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646532636261613738353531353737323135656163353264626134 Dec 16 15:28:01.009000 audit: BPF prog-id=181 op=LOAD Dec 16 15:28:01.009000 audit[4455]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4440 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646532636261613738353531353737323135656163353264626134 Dec 16 15:28:01.009000 audit: BPF prog-id=182 op=LOAD Dec 16 15:28:01.009000 audit[4455]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4440 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.009000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646532636261613738353531353737323135656163353264626134 Dec 16 15:28:01.009000 audit: BPF prog-id=182 op=UNLOAD Dec 16 15:28:01.009000 audit[4455]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4440 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646532636261613738353531353737323135656163353264626134 Dec 16 15:28:01.009000 audit: BPF prog-id=181 op=UNLOAD Dec 16 15:28:01.009000 audit[4455]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4440 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646532636261613738353531353737323135656163353264626134 Dec 16 15:28:01.010000 audit: BPF prog-id=183 op=LOAD Dec 16 15:28:01.010000 audit[4455]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4440 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646532636261613738353531353737323135656163353264626134 Dec 16 15:28:01.099786 systemd[1]: Started cri-containerd-3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd.scope - libcontainer container 3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd. 
Dec 16 15:28:01.180000 audit: BPF prog-id=184 op=LOAD Dec 16 15:28:01.181000 audit: BPF prog-id=185 op=LOAD Dec 16 15:28:01.181000 audit[4500]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4489 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361333637353238663231666335373265313034663734643561323564 Dec 16 15:28:01.185000 audit: BPF prog-id=185 op=UNLOAD Dec 16 15:28:01.185000 audit[4500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4489 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361333637353238663231666335373265313034663734643561323564 Dec 16 15:28:01.189000 audit: BPF prog-id=186 op=LOAD Dec 16 15:28:01.189000 audit[4500]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4489 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361333637353238663231666335373265313034663734643561323564 Dec 16 15:28:01.189000 audit: BPF prog-id=187 op=LOAD Dec 16 15:28:01.189000 audit[4500]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4489 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361333637353238663231666335373265313034663734643561323564 Dec 16 15:28:01.189000 audit: BPF prog-id=187 op=UNLOAD Dec 16 15:28:01.189000 audit[4500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4489 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361333637353238663231666335373265313034663734643561323564 Dec 16 15:28:01.189000 audit: BPF prog-id=186 op=UNLOAD Dec 16 15:28:01.189000 audit[4500]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4489 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361333637353238663231666335373265313034663734643561323564 Dec 16 15:28:01.189000 audit: BPF prog-id=188 op=LOAD Dec 16 15:28:01.189000 audit[4500]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4489 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361333637353238663231666335373265313034663734643561323564 Dec 16 15:28:01.207164 containerd[1667]: time="2025-12-16T15:28:01.207113687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b6d84d89-rmmv2,Uid:7b6af861-64ec-4458-a739-99f9aaa2e0d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\"" Dec 16 15:28:01.219812 containerd[1667]: time="2025-12-16T15:28:01.219760215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 15:28:01.261737 containerd[1667]: time="2025-12-16T15:28:01.261675636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wvd4,Uid:27e89a24-5a1a-4b44-908b-951574a9d075,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a367528f21fc572e104f74d5a25d78d00b17a89b56ccadf538eb443a35269dd\"" Dec 16 15:28:01.378689 systemd-networkd[1577]: calied50875569c: Link UP Dec 16 15:28:01.379081 systemd-networkd[1577]: calied50875569c: Gained carrier Dec 16 15:28:01.401730 containerd[1667]: 2025-12-16 15:28:01.112 [INFO][4504] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 15:28:01.401730 containerd[1667]: 2025-12-16 15:28:01.156 [INFO][4504] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0 calico-apiserver-7c8d6c5b45- calico-apiserver 5a7920f3-ed03-480b-921f-a7a3eaa95ad5 945 0 2025-12-16 15:27:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c8d6c5b45 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-g2i2t.gb1.brightbox.com calico-apiserver-7c8d6c5b45-zv6vg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calied50875569c [] [] }} ContainerID="c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" Namespace="calico-apiserver" Pod="calico-apiserver-7c8d6c5b45-zv6vg" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-" Dec 16 15:28:01.401730 containerd[1667]: 2025-12-16 15:28:01.156 [INFO][4504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" Namespace="calico-apiserver" Pod="calico-apiserver-7c8d6c5b45-zv6vg" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" Dec 16 15:28:01.401730 containerd[1667]: 2025-12-16 15:28:01.300 [INFO][4571] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" HandleID="k8s-pod-network.c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" Dec 16 15:28:01.402753 containerd[1667]: 2025-12-16 15:28:01.300 [INFO][4571] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" HandleID="k8s-pod-network.c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-g2i2t.gb1.brightbox.com", "pod":"calico-apiserver-7c8d6c5b45-zv6vg", "timestamp":"2025-12-16 15:28:01.300230131 +0000 UTC"}, Hostname:"srv-g2i2t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 15:28:01.402753 containerd[1667]: 2025-12-16 15:28:01.300 [INFO][4571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:01.402753 containerd[1667]: 2025-12-16 15:28:01.301 [INFO][4571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 15:28:01.402753 containerd[1667]: 2025-12-16 15:28:01.301 [INFO][4571] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-g2i2t.gb1.brightbox.com' Dec 16 15:28:01.402753 containerd[1667]: 2025-12-16 15:28:01.318 [INFO][4571] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.402753 containerd[1667]: 2025-12-16 15:28:01.328 [INFO][4571] ipam/ipam.go 394: Looking up existing affinities for host host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.402753 containerd[1667]: 2025-12-16 15:28:01.334 [INFO][4571] ipam/ipam.go 511: Trying affinity for 192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.402753 containerd[1667]: 2025-12-16 15:28:01.338 [INFO][4571] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.402753 containerd[1667]: 2025-12-16 15:28:01.343 [INFO][4571] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.403778 containerd[1667]: 2025-12-16 15:28:01.343 [INFO][4571] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.403778 containerd[1667]: 2025-12-16 15:28:01.348 [INFO][4571] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060 Dec 16 15:28:01.403778 containerd[1667]: 2025-12-16 15:28:01.361 [INFO][4571] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.403778 containerd[1667]: 2025-12-16 15:28:01.369 [INFO][4571] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.195/26] block=192.168.126.192/26 handle="k8s-pod-network.c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.403778 containerd[1667]: 2025-12-16 15:28:01.369 [INFO][4571] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.195/26] handle="k8s-pod-network.c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.403778 containerd[1667]: 2025-12-16 15:28:01.369 [INFO][4571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 15:28:01.403778 containerd[1667]: 2025-12-16 15:28:01.369 [INFO][4571] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.195/26] IPv6=[] ContainerID="c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" HandleID="k8s-pod-network.c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" Dec 16 15:28:01.404081 containerd[1667]: 2025-12-16 15:28:01.372 [INFO][4504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" Namespace="calico-apiserver" Pod="calico-apiserver-7c8d6c5b45-zv6vg" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0", GenerateName:"calico-apiserver-7c8d6c5b45-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a7920f3-ed03-480b-921f-a7a3eaa95ad5", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8d6c5b45", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7c8d6c5b45-zv6vg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied50875569c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:01.404181 containerd[1667]: 2025-12-16 15:28:01.372 [INFO][4504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.195/32] ContainerID="c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" Namespace="calico-apiserver" Pod="calico-apiserver-7c8d6c5b45-zv6vg" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" Dec 16 15:28:01.404181 containerd[1667]: 2025-12-16 15:28:01.372 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied50875569c ContainerID="c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" Namespace="calico-apiserver" Pod="calico-apiserver-7c8d6c5b45-zv6vg" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" Dec 16 15:28:01.404181 containerd[1667]: 2025-12-16 15:28:01.379 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" Namespace="calico-apiserver" Pod="calico-apiserver-7c8d6c5b45-zv6vg" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" Dec 16 15:28:01.404586 containerd[1667]: 2025-12-16 
15:28:01.380 [INFO][4504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" Namespace="calico-apiserver" Pod="calico-apiserver-7c8d6c5b45-zv6vg" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0", GenerateName:"calico-apiserver-7c8d6c5b45-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a7920f3-ed03-480b-921f-a7a3eaa95ad5", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8d6c5b45", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060", Pod:"calico-apiserver-7c8d6c5b45-zv6vg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied50875569c", MAC:"fe:3c:b3:aa:21:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:01.404837 containerd[1667]: 2025-12-16 15:28:01.396 [INFO][4504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" Namespace="calico-apiserver" Pod="calico-apiserver-7c8d6c5b45-zv6vg" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--7c8d6c5b45--zv6vg-eth0" Dec 16 15:28:01.460599 containerd[1667]: time="2025-12-16T15:28:01.459148981Z" level=info msg="connecting to shim c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060" address="unix:///run/containerd/s/0f63e7dc82c965166981aa3e36e581f368f4083aaf92ff55695bb90c19e9e1bf" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:28:01.526189 systemd-networkd[1577]: cali290a19ed45c: Link UP Dec 16 15:28:01.528192 systemd-networkd[1577]: cali290a19ed45c: Gained carrier Dec 16 15:28:01.569662 containerd[1667]: time="2025-12-16T15:28:01.569554827Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:01.571815 containerd[1667]: time="2025-12-16T15:28:01.571764123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 15:28:01.571903 containerd[1667]: time="2025-12-16T15:28:01.571866847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:01.572852 kubelet[3007]: E1216 
15:28:01.572155 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 15:28:01.573844 kubelet[3007]: E1216 15:28:01.573546 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 15:28:01.573944 kubelet[3007]: E1216 15:28:01.573861 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-54b6d84d89-rmmv2_calico-system(7b6af861-64ec-4458-a739-99f9aaa2e0d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:01.574359 containerd[1667]: time="2025-12-16T15:28:01.574102305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 15:28:01.576855 systemd[1]: Started cri-containerd-c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060.scope - libcontainer container c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060. Dec 16 15:28:01.578926 containerd[1667]: 2025-12-16 15:28:01.191 [INFO][4513] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 15:28:01.578926 containerd[1667]: 2025-12-16 15:28:01.223 [INFO][4513] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0 coredns-66bc5c9577- kube-system 54ee0443-31bd-4488-a38f-608c67dce5d8 946 0 2025-12-16 15:27:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-g2i2t.gb1.brightbox.com coredns-66bc5c9577-zkpwr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali290a19ed45c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" Namespace="kube-system" Pod="coredns-66bc5c9577-zkpwr" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-" Dec 16 15:28:01.578926 containerd[1667]: 2025-12-16 15:28:01.223 [INFO][4513] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" Namespace="kube-system" Pod="coredns-66bc5c9577-zkpwr" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" Dec 16 15:28:01.578926 containerd[1667]: 2025-12-16 15:28:01.321 [INFO][4587] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" HandleID="k8s-pod-network.8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" Workload="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" Dec 16 15:28:01.579782 containerd[1667]: 2025-12-16 15:28:01.322 [INFO][4587] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" HandleID="k8s-pod-network.8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" Workload="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039aad0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-g2i2t.gb1.brightbox.com", "pod":"coredns-66bc5c9577-zkpwr", "timestamp":"2025-12-16 15:28:01.32145563 +0000 UTC"}, Hostname:"srv-g2i2t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 15:28:01.579782 containerd[1667]: 2025-12-16 15:28:01.322 [INFO][4587] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:01.579782 containerd[1667]: 2025-12-16 15:28:01.369 [INFO][4587] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 15:28:01.579782 containerd[1667]: 2025-12-16 15:28:01.369 [INFO][4587] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-g2i2t.gb1.brightbox.com' Dec 16 15:28:01.579782 containerd[1667]: 2025-12-16 15:28:01.420 [INFO][4587] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.579782 containerd[1667]: 2025-12-16 15:28:01.438 [INFO][4587] ipam/ipam.go 394: Looking up existing affinities for host host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.579782 containerd[1667]: 2025-12-16 15:28:01.457 [INFO][4587] ipam/ipam.go 511: Trying affinity for 192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.579782 containerd[1667]: 2025-12-16 15:28:01.466 [INFO][4587] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.579782 containerd[1667]: 2025-12-16 15:28:01.474 [INFO][4587] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.580182 containerd[1667]: 2025-12-16 15:28:01.474 [INFO][4587] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.580182 containerd[1667]: 2025-12-16 15:28:01.477 [INFO][4587] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff Dec 16 15:28:01.580182 containerd[1667]: 2025-12-16 15:28:01.491 [INFO][4587] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.580182 containerd[1667]: 2025-12-16 15:28:01.511 [INFO][4587] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.196/26] block=192.168.126.192/26 handle="k8s-pod-network.8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.580182 containerd[1667]: 2025-12-16 15:28:01.511 [INFO][4587] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.196/26] handle="k8s-pod-network.8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.580182 containerd[1667]: 
2025-12-16 15:28:01.511 [INFO][4587] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 15:28:01.580182 containerd[1667]: 2025-12-16 15:28:01.511 [INFO][4587] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.196/26] IPv6=[] ContainerID="8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" HandleID="k8s-pod-network.8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" Workload="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" Dec 16 15:28:01.580462 containerd[1667]: 2025-12-16 15:28:01.516 [INFO][4513] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" Namespace="kube-system" Pod="coredns-66bc5c9577-zkpwr" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"54ee0443-31bd-4488-a38f-608c67dce5d8", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-zkpwr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali290a19ed45c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:01.580462 containerd[1667]: 2025-12-16 15:28:01.517 [INFO][4513] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.196/32] ContainerID="8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" Namespace="kube-system" Pod="coredns-66bc5c9577-zkpwr" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" Dec 16 15:28:01.580462 containerd[1667]: 2025-12-16 15:28:01.517 [INFO][4513] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali290a19ed45c 
ContainerID="8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" Namespace="kube-system" Pod="coredns-66bc5c9577-zkpwr" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" Dec 16 15:28:01.580462 containerd[1667]: 2025-12-16 15:28:01.529 [INFO][4513] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" Namespace="kube-system" Pod="coredns-66bc5c9577-zkpwr" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" Dec 16 15:28:01.580462 containerd[1667]: 2025-12-16 15:28:01.530 [INFO][4513] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" Namespace="kube-system" Pod="coredns-66bc5c9577-zkpwr" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"54ee0443-31bd-4488-a38f-608c67dce5d8", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff", Pod:"coredns-66bc5c9577-zkpwr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali290a19ed45c", MAC:"92:28:71:b1:bb:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:01.581762 containerd[1667]: 2025-12-16 15:28:01.561 [INFO][4513] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" Namespace="kube-system" Pod="coredns-66bc5c9577-zkpwr" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--zkpwr-eth0" Dec 16 15:28:01.630000 audit: BPF prog-id=189 op=LOAD 
Dec 16 15:28:01.631000 audit: BPF prog-id=190 op=LOAD Dec 16 15:28:01.631000 audit[4629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4617 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396539376666623531366532633762313234303965363133303035 Dec 16 15:28:01.633000 audit: BPF prog-id=190 op=UNLOAD Dec 16 15:28:01.633000 audit[4629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4617 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396539376666623531366532633762313234303965363133303035 Dec 16 15:28:01.635000 audit: BPF prog-id=191 op=LOAD Dec 16 15:28:01.635000 audit[4629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4617 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396539376666623531366532633762313234303965363133303035 Dec 16 15:28:01.635000 audit: BPF prog-id=192 op=LOAD Dec 16 15:28:01.635000 audit[4629]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4617 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396539376666623531366532633762313234303965363133303035 Dec 16 15:28:01.635000 audit: BPF prog-id=192 op=UNLOAD Dec 16 15:28:01.635000 audit[4629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4617 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396539376666623531366532633762313234303965363133303035 Dec 16 15:28:01.635000 audit: BPF prog-id=191 op=UNLOAD Dec 16 15:28:01.635000 audit[4629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 
a2=0 a3=0 items=0 ppid=4617 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396539376666623531366532633762313234303965363133303035 Dec 16 15:28:01.635000 audit: BPF prog-id=193 op=LOAD Dec 16 15:28:01.635000 audit[4629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4617 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396539376666623531366532633762313234303965363133303035 Dec 16 15:28:01.663700 systemd-networkd[1577]: calibed9007cb2b: Link UP Dec 16 15:28:01.668205 systemd-networkd[1577]: calibed9007cb2b: Gained carrier Dec 16 15:28:01.676560 systemd-networkd[1577]: cali1a431d5cb33: Gained IPv6LL Dec 16 15:28:01.717331 containerd[1667]: time="2025-12-16T15:28:01.716941690Z" level=info msg="connecting to shim 8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff" address="unix:///run/containerd/s/6e96f73d222623f1b6a1ed0679ab0c14e031d6ef0785ec892db30e1116fcf010" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.153 [INFO][4502] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.215 [INFO][4502] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0 goldmane-7c778bb748- calico-system fa623572-3c69-4396-806f-a142b1ffa21a 947 0 2025-12-16 15:27:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-g2i2t.gb1.brightbox.com goldmane-7c778bb748-gjpms eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibed9007cb2b [] [] }} ContainerID="7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" Namespace="calico-system" Pod="goldmane-7c778bb748-gjpms" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-" Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.215 [INFO][4502] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" Namespace="calico-system" Pod="goldmane-7c778bb748-gjpms" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.325 [INFO][4585] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" HandleID="k8s-pod-network.7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" 
Workload="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.325 [INFO][4585] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" HandleID="k8s-pod-network.7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" Workload="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f950), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-g2i2t.gb1.brightbox.com", "pod":"goldmane-7c778bb748-gjpms", "timestamp":"2025-12-16 15:28:01.325444124 +0000 UTC"}, Hostname:"srv-g2i2t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.325 [INFO][4585] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.511 [INFO][4585] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.511 [INFO][4585] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-g2i2t.gb1.brightbox.com' Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.531 [INFO][4585] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.545 [INFO][4585] ipam/ipam.go 394: Looking up existing affinities for host host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.556 [INFO][4585] ipam/ipam.go 511: Trying affinity for 192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.567 [INFO][4585] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.584 [INFO][4585] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.585 [INFO][4585] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.593 [INFO][4585] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377 Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.608 [INFO][4585] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.629 [INFO][4585] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.197/26] block=192.168.126.192/26 handle="k8s-pod-network.7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.629 [INFO][4585] ipam/ipam.go 878: Auto-assigned 1 
out of 1 IPv4s: [192.168.126.197/26] handle="k8s-pod-network.7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.629 [INFO][4585] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 15:28:01.719488 containerd[1667]: 2025-12-16 15:28:01.630 [INFO][4585] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.197/26] IPv6=[] ContainerID="7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" HandleID="k8s-pod-network.7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" Workload="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" Dec 16 15:28:01.721883 containerd[1667]: 2025-12-16 15:28:01.643 [INFO][4502] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" Namespace="calico-system" Pod="goldmane-7c778bb748-gjpms" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fa623572-3c69-4396-806f-a142b1ffa21a", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-7c778bb748-gjpms", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibed9007cb2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:01.721883 containerd[1667]: 2025-12-16 15:28:01.643 [INFO][4502] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.197/32] ContainerID="7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" Namespace="calico-system" Pod="goldmane-7c778bb748-gjpms" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" Dec 16 15:28:01.721883 containerd[1667]: 2025-12-16 15:28:01.643 [INFO][4502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibed9007cb2b ContainerID="7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" Namespace="calico-system" Pod="goldmane-7c778bb748-gjpms" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" Dec 16 15:28:01.721883 containerd[1667]: 2025-12-16 15:28:01.672 [INFO][4502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" Namespace="calico-system" Pod="goldmane-7c778bb748-gjpms" 
WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" Dec 16 15:28:01.721883 containerd[1667]: 2025-12-16 15:28:01.672 [INFO][4502] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" Namespace="calico-system" Pod="goldmane-7c778bb748-gjpms" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fa623572-3c69-4396-806f-a142b1ffa21a", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377", Pod:"goldmane-7c778bb748-gjpms", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibed9007cb2b", MAC:"b6:c9:8e:7b:06:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:01.721883 containerd[1667]: 2025-12-16 15:28:01.699 [INFO][4502] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" Namespace="calico-system" Pod="goldmane-7c778bb748-gjpms" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-goldmane--7c778bb748--gjpms-eth0" Dec 16 15:28:01.778309 containerd[1667]: time="2025-12-16T15:28:01.778111418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8d6c5b45-zv6vg,Uid:5a7920f3-ed03-480b-921f-a7a3eaa95ad5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c99e97ffb516e2c7b12409e61300575513769b2a1f39a812c4cd9efedd6f2060\"" Dec 16 15:28:01.792533 containerd[1667]: time="2025-12-16T15:28:01.792461448Z" level=info msg="connecting to shim 7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377" address="unix:///run/containerd/s/6ab4572ede56e57a1c08285b4e517c9bcf02061960e96a8c8827217b36ae5958" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:28:01.801100 systemd[1]: Started cri-containerd-8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff.scope - libcontainer container 8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff. 
Dec 16 15:28:01.832000 audit: BPF prog-id=194 op=LOAD Dec 16 15:28:01.833000 audit: BPF prog-id=195 op=LOAD Dec 16 15:28:01.833000 audit[4683]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4667 pid=4683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666330393531613734393363373532306237636131343736306535 Dec 16 15:28:01.833000 audit: BPF prog-id=195 op=UNLOAD Dec 16 15:28:01.833000 audit[4683]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666330393531613734393363373532306237636131343736306535 Dec 16 15:28:01.833000 audit: BPF prog-id=196 op=LOAD Dec 16 15:28:01.833000 audit[4683]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4667 pid=4683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666330393531613734393363373532306237636131343736306535 Dec 16 15:28:01.833000 audit: BPF prog-id=197 op=LOAD Dec 16 15:28:01.833000 audit[4683]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4667 pid=4683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666330393531613734393363373532306237636131343736306535 Dec 16 15:28:01.833000 audit: BPF prog-id=197 op=UNLOAD Dec 16 15:28:01.833000 audit[4683]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666330393531613734393363373532306237636131343736306535 Dec 16 15:28:01.834000 audit: BPF prog-id=196 op=UNLOAD Dec 16 15:28:01.834000 audit[4683]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666330393531613734393363373532306237636131343736306535 Dec 16 15:28:01.834000 audit: BPF prog-id=198 op=LOAD Dec 16 15:28:01.834000 audit[4683]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4667 pid=4683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862666330393531613734393363373532306237636131343736306535 Dec 16 15:28:01.842803 systemd[1]: Started cri-containerd-7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377.scope - libcontainer container 7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377. Dec 16 15:28:01.875000 audit: BPF prog-id=199 op=LOAD Dec 16 15:28:01.877000 audit: BPF prog-id=200 op=LOAD Dec 16 15:28:01.877000 audit[4724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4708 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636138383735373665363363616666626638316339373238656562 Dec 16 15:28:01.880000 audit: BPF prog-id=200 op=UNLOAD Dec 16 15:28:01.880000 audit[4724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636138383735373665363363616666626638316339373238656562 Dec 16 15:28:01.883000 audit: BPF prog-id=201 op=LOAD Dec 16 15:28:01.883000 audit[4724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4708 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.883000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636138383735373665363363616666626638316339373238656562 Dec 16 15:28:01.883000 audit: BPF prog-id=202 op=LOAD Dec 16 15:28:01.883000 audit[4724]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4708 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636138383735373665363363616666626638316339373238656562 Dec 16 15:28:01.885000 audit: BPF prog-id=202 op=UNLOAD Dec 16 15:28:01.885000 audit[4724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636138383735373665363363616666626638316339373238656562 Dec 16 15:28:01.885000 audit: BPF prog-id=201 op=UNLOAD Dec 16 15:28:01.885000 audit[4724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636138383735373665363363616666626638316339373238656562 Dec 16 15:28:01.886000 audit: BPF prog-id=203 op=LOAD Dec 16 15:28:01.886000 audit[4724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4708 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:01.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636138383735373665363363616666626638316339373238656562 Dec 16 15:28:01.907286 containerd[1667]: time="2025-12-16T15:28:01.907236939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zkpwr,Uid:54ee0443-31bd-4488-a38f-608c67dce5d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff\"" Dec 16 15:28:01.919053 containerd[1667]: time="2025-12-16T15:28:01.918993094Z" level=info msg="CreateContainer within sandbox \"8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 15:28:01.946360 containerd[1667]: time="2025-12-16T15:28:01.946302891Z" level=info msg="Container 2ddeaf0fd458cb6f4cac65ceb20ca50ad03d8965394a1e6afd2b780b182aea1b: CDI devices from CRI Config.CDIDevices: []" Dec 16 15:28:01.958692 containerd[1667]: time="2025-12-16T15:28:01.958622223Z" level=info msg="CreateContainer within sandbox \"8bfc0951a7493c7520b7ca14760e5440a9f94dd1f76f31ded182a530f1c151ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ddeaf0fd458cb6f4cac65ceb20ca50ad03d8965394a1e6afd2b780b182aea1b\"" Dec 16 15:28:01.961055 containerd[1667]: time="2025-12-16T15:28:01.961018002Z" level=info msg="StartContainer for \"2ddeaf0fd458cb6f4cac65ceb20ca50ad03d8965394a1e6afd2b780b182aea1b\"" Dec 16 15:28:01.963373 containerd[1667]: time="2025-12-16T15:28:01.963335434Z" level=info msg="connecting to shim 2ddeaf0fd458cb6f4cac65ceb20ca50ad03d8965394a1e6afd2b780b182aea1b" address="unix:///run/containerd/s/6e96f73d222623f1b6a1ed0679ab0c14e031d6ef0785ec892db30e1116fcf010" protocol=ttrpc version=3 Dec 16 15:28:02.023920 containerd[1667]: time="2025-12-16T15:28:02.023863568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:02.028197 containerd[1667]: time="2025-12-16T15:28:02.027797955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 15:28:02.028197 containerd[1667]: time="2025-12-16T15:28:02.027957651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:02.028671 kubelet[3007]: E1216 15:28:02.028440 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 15:28:02.028671 kubelet[3007]: E1216 15:28:02.028495 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 15:28:02.029396 kubelet[3007]: E1216 15:28:02.029266 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7wvd4_calico-system(27e89a24-5a1a-4b44-908b-951574a9d075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:02.031149 containerd[1667]: time="2025-12-16T15:28:02.031114330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 15:28:02.046868 systemd[1]: Started cri-containerd-2ddeaf0fd458cb6f4cac65ceb20ca50ad03d8965394a1e6afd2b780b182aea1b.scope - libcontainer container 2ddeaf0fd458cb6f4cac65ceb20ca50ad03d8965394a1e6afd2b780b182aea1b. 
Dec 16 15:28:02.063047 containerd[1667]: time="2025-12-16T15:28:02.062996057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-gjpms,Uid:fa623572-3c69-4396-806f-a142b1ffa21a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7aca887576e63caffbf81c9728eeb75193f297ef393d558e3c39c7fe9d87b377\"" Dec 16 15:28:02.100000 audit: BPF prog-id=204 op=LOAD Dec 16 15:28:02.102000 audit: BPF prog-id=205 op=LOAD Dec 16 15:28:02.102000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4667 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264646561663066643435386362366634636163363563656232306361 Dec 16 15:28:02.102000 audit: BPF prog-id=205 op=UNLOAD Dec 16 15:28:02.102000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264646561663066643435386362366634636163363563656232306361 Dec 16 15:28:02.103000 audit: BPF prog-id=206 op=LOAD Dec 16 15:28:02.103000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4667 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264646561663066643435386362366634636163363563656232306361 Dec 16 15:28:02.103000 audit: BPF prog-id=207 op=LOAD Dec 16 15:28:02.103000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4667 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264646561663066643435386362366634636163363563656232306361 Dec 16 15:28:02.103000 audit: BPF prog-id=207 op=UNLOAD Dec 16 15:28:02.103000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.103000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264646561663066643435386362366634636163363563656232306361 Dec 16 15:28:02.103000 audit: BPF prog-id=206 op=UNLOAD Dec 16 15:28:02.103000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264646561663066643435386362366634636163363563656232306361 Dec 16 15:28:02.104000 audit: BPF prog-id=208 op=LOAD Dec 16 15:28:02.104000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4667 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264646561663066643435386362366634636163363563656232306361 Dec 16 15:28:02.163865 containerd[1667]: time="2025-12-16T15:28:02.163715631Z" level=info msg="StartContainer for \"2ddeaf0fd458cb6f4cac65ceb20ca50ad03d8965394a1e6afd2b780b182aea1b\" returns successfully" Dec 16 15:28:02.315863 systemd-networkd[1577]: calic8d7a330f0b: Gained IPv6LL Dec 16 15:28:02.375885 containerd[1667]: time="2025-12-16T15:28:02.375818793Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:02.377925 containerd[1667]: time="2025-12-16T15:28:02.377256689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 15:28:02.378034 containerd[1667]: time="2025-12-16T15:28:02.377946090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:02.408819 kubelet[3007]: E1216 15:28:02.408725 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 15:28:02.409101 kubelet[3007]: E1216 15:28:02.408914 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 15:28:02.415905 kubelet[3007]: E1216 15:28:02.415762 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-54b6d84d89-rmmv2_calico-system(7b6af861-64ec-4458-a739-99f9aaa2e0d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:02.417465 kubelet[3007]: E1216 15:28:02.417407 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54b6d84d89-rmmv2" podUID="7b6af861-64ec-4458-a739-99f9aaa2e0d3" Dec 16 15:28:02.418738 containerd[1667]: time="2025-12-16T15:28:02.418665267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 15:28:02.699713 systemd-networkd[1577]: calibed9007cb2b: Gained IPv6LL Dec 16 15:28:02.742415 containerd[1667]: time="2025-12-16T15:28:02.742146883Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:02.746490 containerd[1667]: time="2025-12-16T15:28:02.746441622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 15:28:02.747226 containerd[1667]: time="2025-12-16T15:28:02.747188608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:02.747675 kubelet[3007]: E1216 15:28:02.747583 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:02.748905 kubelet[3007]: E1216 15:28:02.747702 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:02.748905 kubelet[3007]: E1216 15:28:02.747884 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c8d6c5b45-zv6vg_calico-apiserver(5a7920f3-ed03-480b-921f-a7a3eaa95ad5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:02.748905 kubelet[3007]: E1216 15:28:02.747939 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" podUID="5a7920f3-ed03-480b-921f-a7a3eaa95ad5" Dec 16 15:28:02.749799 containerd[1667]: time="2025-12-16T15:28:02.749753462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 15:28:02.891903 systemd-networkd[1577]: cali290a19ed45c: Gained IPv6LL Dec 16 15:28:02.924000 audit: BPF prog-id=209 op=LOAD Dec 16 15:28:02.924000 audit[4915]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe12556b90 a2=98 a3=1fffffffffffffff items=0 ppid=4801 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.924000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 15:28:02.924000 audit: BPF prog-id=209 op=UNLOAD Dec 16 15:28:02.924000 audit[4915]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe12556b60 a3=0 items=0 ppid=4801 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.924000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 15:28:02.924000 audit: BPF prog-id=210 op=LOAD Dec 16 15:28:02.924000 audit[4915]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe12556a70 a2=94 a3=3 items=0 ppid=4801 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.924000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 15:28:02.924000 audit: BPF prog-id=210 op=UNLOAD Dec 16 15:28:02.924000 audit[4915]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe12556a70 a2=94 a3=3 items=0 ppid=4801 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.924000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 15:28:02.925000 audit: BPF prog-id=211 op=LOAD Dec 16 15:28:02.925000 audit[4915]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe12556ab0 a2=94 a3=7ffe12556c90 items=0 ppid=4801 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 15:28:02.925000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 15:28:02.925000 audit: BPF prog-id=211 op=UNLOAD Dec 16 15:28:02.925000 audit[4915]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe12556ab0 a2=94 a3=7ffe12556c90 items=0 ppid=4801 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.925000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 15:28:02.928000 audit: BPF prog-id=212 op=LOAD Dec 16 15:28:02.928000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8ab16af0 a2=98 a3=3 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.928000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:02.928000 audit: BPF prog-id=212 op=UNLOAD Dec 16 15:28:02.928000 audit[4916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff8ab16ac0 a3=0 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.928000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:02.929000 audit: BPF prog-id=213 op=LOAD Dec 16 15:28:02.929000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff8ab168e0 a2=94 a3=54428f items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:02.929000 audit: BPF prog-id=213 op=UNLOAD Dec 16 15:28:02.929000 audit[4916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff8ab168e0 a2=94 a3=54428f items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:02.929000 audit: BPF prog-id=214 op=LOAD Dec 16 15:28:02.929000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff8ab16910 a2=94 a3=2 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:02.929000 audit: BPF prog-id=214 op=UNLOAD Dec 16 15:28:02.929000 audit[4916]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=4 a1=7fff8ab16910 a2=0 a3=2 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:02.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.098963 containerd[1667]: time="2025-12-16T15:28:03.098729718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:03.100483 containerd[1667]: time="2025-12-16T15:28:03.100452927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:03.101116 containerd[1667]: time="2025-12-16T15:28:03.100646641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 15:28:03.106472 kubelet[3007]: E1216 15:28:03.106076 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 15:28:03.107099 kubelet[3007]: E1216 15:28:03.106256 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" podUID="5a7920f3-ed03-480b-921f-a7a3eaa95ad5" Dec 16 15:28:03.107099 kubelet[3007]: E1216 15:28:03.106353 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 15:28:03.108017 containerd[1667]: time="2025-12-16T15:28:03.106683309Z" level=info msg="StopPodSandbox for \"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\"" Dec 16 15:28:03.108530 kubelet[3007]: E1216 15:28:03.108204 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7wvd4_calico-system(27e89a24-5a1a-4b44-908b-951574a9d075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:03.108530 kubelet[3007]: E1216 15:28:03.108268 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:28:03.109830 containerd[1667]: time="2025-12-16T15:28:03.109537772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 15:28:03.147883 systemd-networkd[1577]: calied50875569c: Gained IPv6LL Dec 16 15:28:03.153741 systemd[1]: cri-containerd-8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8.scope: Deactivated successfully. Dec 16 15:28:03.154232 systemd[1]: cri-containerd-8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8.scope: Consumed 58ms CPU time, 4.8M memory peak, 1.2M read from disk. Dec 16 15:28:03.156000 audit: BPF prog-id=179 op=UNLOAD Dec 16 15:28:03.156000 audit: BPF prog-id=183 op=UNLOAD Dec 16 15:28:03.186542 containerd[1667]: time="2025-12-16T15:28:03.185719794Z" level=info msg="received sandbox exit event container_id:\"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\" id:\"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\" exit_status:137 exited_at:{seconds:1765898883 nanos:169709554}" monitor_name=podsandbox Dec 16 15:28:03.209844 kubelet[3007]: I1216 15:28:03.209377 3007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zkpwr" podStartSLOduration=56.202330353 podStartE2EDuration="56.202330353s" podCreationTimestamp="2025-12-16 15:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 15:28:03.127262163 +0000 UTC m=+62.137197290" watchObservedRunningTime="2025-12-16 15:28:03.202330353 +0000 UTC m=+62.212265487" Dec 16 15:28:03.278322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8-rootfs.mount: Deactivated successfully. 
Dec 16 15:28:03.307838 containerd[1667]: time="2025-12-16T15:28:03.307777703Z" level=info msg="shim disconnected" id=8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8 namespace=k8s.io Dec 16 15:28:03.307838 containerd[1667]: time="2025-12-16T15:28:03.307828599Z" level=info msg="cleaning up after shim disconnected" id=8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8 namespace=k8s.io Dec 16 15:28:03.328879 containerd[1667]: time="2025-12-16T15:28:03.307854727Z" level=info msg="cleaning up dead shim" id=8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8 namespace=k8s.io Dec 16 15:28:03.375000 audit[4949]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:28:03.375000 audit[4949]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff9d9503c0 a2=0 a3=7fff9d9503ac items=0 ppid=3127 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:28:03.379000 audit[4949]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:28:03.379000 audit[4949]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff9d9503c0 a2=0 a3=0 items=0 ppid=3127 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:28:03.401000 audit: BPF prog-id=215 op=LOAD Dec 16 15:28:03.401000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff8ab167d0 a2=94 a3=1 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.401000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.402000 audit: BPF prog-id=215 op=UNLOAD Dec 16 15:28:03.402000 audit[4916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff8ab167d0 a2=94 a3=1 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.402000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.425545 containerd[1667]: time="2025-12-16T15:28:03.416001334Z" level=info msg="received sandbox container exit event sandbox_id:\"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\" exit_status:137 exited_at:{seconds:1765898883 nanos:169709554}" monitor_name=criService Dec 16 15:28:03.420372 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8-shm.mount: Deactivated successfully. 
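The audit PROCTITLE records above store the executed command line as a hex-encoded, NUL-separated argv. A minimal decoding sketch (hypothetical helper, not part of the log) that recovers the commands behind the bpftool and iptables-restore entries:

    def decode_proctitle(hex_proctitle: str) -> str:
        # auditd hex-encodes the proctitle; arguments are separated by NUL bytes.
        return bytes.fromhex(hex_proctitle).decode().replace("\x00", " ")

    print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
    # bpftool map list --json
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"))
    # iptables-restore -w 5 --noflush --counters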
Dec 16 15:28:03.440000 audit: BPF prog-id=216 op=LOAD Dec 16 15:28:03.440000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff8ab167c0 a2=94 a3=4 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.440000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.440000 audit: BPF prog-id=216 op=UNLOAD Dec 16 15:28:03.440000 audit[4916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff8ab167c0 a2=0 a3=4 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.440000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.441000 audit: BPF prog-id=217 op=LOAD Dec 16 15:28:03.441000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8ab16620 a2=94 a3=5 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.441000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.441000 audit: BPF prog-id=217 op=UNLOAD Dec 16 15:28:03.441000 audit[4916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff8ab16620 a2=0 a3=5 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.441000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.441000 audit: BPF prog-id=218 op=LOAD Dec 16 15:28:03.441000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff8ab16840 a2=94 a3=6 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.441000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.442000 audit: BPF prog-id=218 op=UNLOAD Dec 16 15:28:03.442000 audit[4916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff8ab16840 a2=0 a3=6 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.442000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.442000 audit: BPF prog-id=219 op=LOAD Dec 16 15:28:03.442000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff8ab15ff0 a2=94 a3=88 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.442000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.442000 audit: BPF prog-id=220 op=LOAD Dec 16 15:28:03.442000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff8ab15e70 a2=94 a3=2 items=0 
ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.442000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.442000 audit: BPF prog-id=220 op=UNLOAD Dec 16 15:28:03.442000 audit[4916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff8ab15ea0 a2=0 a3=7fff8ab15fa0 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.442000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.444000 audit: BPF prog-id=219 op=UNLOAD Dec 16 15:28:03.444000 audit[4916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=5dbed10 a2=0 a3=e460b2b09e585f80 items=0 ppid=4801 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.444000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 15:28:03.474721 containerd[1667]: time="2025-12-16T15:28:03.474643295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:03.477266 containerd[1667]: time="2025-12-16T15:28:03.477013346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 15:28:03.477266 containerd[1667]: time="2025-12-16T15:28:03.477123854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:03.479546 kubelet[3007]: E1216 15:28:03.477838 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 15:28:03.479546 kubelet[3007]: E1216 15:28:03.477910 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 15:28:03.479546 kubelet[3007]: E1216 15:28:03.478053 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-gjpms_calico-system(fa623572-3c69-4396-806f-a142b1ffa21a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:03.485900 kubelet[3007]: E1216 15:28:03.485470 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-7c778bb748-gjpms" podUID="fa623572-3c69-4396-806f-a142b1ffa21a" Dec 16 15:28:03.493000 audit: BPF prog-id=221 op=LOAD Dec 16 15:28:03.493000 audit[4971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc6160590 a2=98 a3=1999999999999999 items=0 ppid=4801 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.493000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 15:28:03.494000 audit: BPF prog-id=221 op=UNLOAD Dec 16 15:28:03.494000 audit[4971]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcc6160560 a3=0 items=0 ppid=4801 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.494000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 15:28:03.494000 audit: BPF prog-id=222 op=LOAD Dec 16 15:28:03.494000 audit[4971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc6160470 a2=94 a3=ffff items=0 ppid=4801 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.494000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 15:28:03.494000 audit: BPF prog-id=222 op=UNLOAD Dec 16 15:28:03.494000 audit[4971]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcc6160470 a2=94 a3=ffff items=0 ppid=4801 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.494000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 15:28:03.494000 audit: BPF prog-id=223 op=LOAD Dec 16 15:28:03.494000 audit[4971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc61604b0 a2=94 a3=7ffcc6160690 items=0 ppid=4801 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.494000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 15:28:03.494000 audit: BPF prog-id=223 op=UNLOAD Dec 16 15:28:03.494000 audit[4971]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcc61604b0 a2=94 a3=7ffcc6160690 items=0 ppid=4801 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.494000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 15:28:03.546358 systemd-networkd[1577]: cali1a431d5cb33: Link DOWN Dec 16 15:28:03.546373 systemd-networkd[1577]: cali1a431d5cb33: Lost carrier Dec 16 15:28:03.710597 systemd-networkd[1577]: vxlan.calico: Link UP Dec 16 15:28:03.710611 systemd-networkd[1577]: vxlan.calico: Gained carrier Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.541 [INFO][4966] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.541 [INFO][4966] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" iface="eth0" netns="/var/run/netns/cni-0ea54204-0f1a-eb48-88cb-e345863cdf4c" Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.543 [INFO][4966] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" iface="eth0" netns="/var/run/netns/cni-0ea54204-0f1a-eb48-88cb-e345863cdf4c" Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.561 [INFO][4966] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" after=19.85581ms iface="eth0" netns="/var/run/netns/cni-0ea54204-0f1a-eb48-88cb-e345863cdf4c" Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.561 [INFO][4966] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.561 [INFO][4966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.622 [INFO][4986] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" HandleID="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.622 [INFO][4986] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.622 [INFO][4986] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.727 [INFO][4986] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" HandleID="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.727 [INFO][4986] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" HandleID="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.732 [INFO][4986] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 15:28:03.760564 containerd[1667]: 2025-12-16 15:28:03.741 [INFO][4966] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Dec 16 15:28:03.770563 systemd[1]: run-netns-cni\x2d0ea54204\x2d0f1a\x2deb48\x2d88cb\x2de345863cdf4c.mount: Deactivated successfully. Dec 16 15:28:03.788799 containerd[1667]: time="2025-12-16T15:28:03.785138938Z" level=info msg="TearDown network for sandbox \"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\" successfully" Dec 16 15:28:03.788799 containerd[1667]: time="2025-12-16T15:28:03.785251776Z" level=info msg="StopPodSandbox for \"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\" returns successfully" Dec 16 15:28:03.797000 audit: BPF prog-id=224 op=LOAD Dec 16 15:28:03.803312 kernel: kauditd_printk_skb: 235 callbacks suppressed Dec 16 15:28:03.804818 kernel: audit: type=1334 audit(1765898883.797:685): prog-id=224 op=LOAD Dec 16 15:28:03.797000 audit[5009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb5e437f0 a2=98 a3=20 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.813554 kernel: audit: type=1300 audit(1765898883.797:685): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb5e437f0 a2=98 a3=20 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.797000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.823842 kernel: audit: type=1327 audit(1765898883.797:685): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.797000 audit: BPF prog-id=224 op=UNLOAD Dec 16 15:28:03.828658 kernel: audit: type=1334 audit(1765898883.797:686): prog-id=224 op=UNLOAD Dec 16 15:28:03.797000 audit[5009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdb5e437c0 a3=0 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.835591 kernel: audit: type=1300 audit(1765898883.797:686): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdb5e437c0 a3=0 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.845866 kernel: audit: type=1327 audit(1765898883.797:686): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.797000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.853547 kernel: audit: type=1334 audit(1765898883.809:687): prog-id=225 op=LOAD Dec 16 15:28:03.809000 audit: BPF prog-id=225 op=LOAD Dec 16 15:28:03.859591 kernel: audit: type=1300 audit(1765898883.809:687): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb5e43600 a2=94 a3=54428f items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.809000 audit[5009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb5e43600 a2=94 a3=54428f items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.867163 kernel: audit: type=1327 audit(1765898883.809:687): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.811000 audit: BPF prog-id=225 op=UNLOAD Dec 16 15:28:03.871887 kernel: audit: type=1334 audit(1765898883.811:688): prog-id=225 op=UNLOAD Dec 16 15:28:03.811000 audit[5009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdb5e43600 a2=94 a3=54428f items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.811000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.811000 audit: BPF prog-id=226 op=LOAD Dec 16 15:28:03.811000 audit[5009]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb5e43630 a2=94 a3=2 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.811000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.811000 audit: BPF prog-id=226 op=UNLOAD Dec 16 15:28:03.811000 audit[5009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdb5e43630 a2=0 a3=2 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.811000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.811000 audit: BPF prog-id=227 op=LOAD Dec 16 15:28:03.811000 audit[5009]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb5e433e0 a2=94 a3=4 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.811000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.811000 audit: BPF prog-id=227 op=UNLOAD Dec 16 15:28:03.811000 audit[5009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdb5e433e0 a2=94 a3=4 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.811000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.811000 audit: BPF prog-id=228 op=LOAD Dec 16 15:28:03.811000 audit[5009]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb5e434e0 a2=94 a3=7ffdb5e43660 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.811000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.811000 audit: BPF prog-id=228 op=UNLOAD Dec 16 15:28:03.811000 audit[5009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdb5e434e0 a2=0 a3=7ffdb5e43660 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.811000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.822000 audit: BPF prog-id=229 op=LOAD Dec 16 15:28:03.822000 audit[5009]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb5e42c10 a2=94 a3=2 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.822000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.822000 audit: BPF prog-id=229 op=UNLOAD Dec 16 15:28:03.822000 audit[5009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdb5e42c10 a2=0 a3=2 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.822000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.823000 audit: BPF prog-id=230 op=LOAD Dec 16 15:28:03.823000 audit[5009]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb5e42d10 a2=94 a3=30 items=0 ppid=4801 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.823000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 15:28:03.845000 audit: BPF prog-id=231 op=LOAD Dec 16 15:28:03.845000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4da0df60 a2=98 a3=0 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.845000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:03.845000 audit: BPF prog-id=231 op=UNLOAD Dec 16 15:28:03.845000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd4da0df30 a3=0 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.845000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:03.846000 audit: BPF prog-id=232 op=LOAD Dec 16 15:28:03.846000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd4da0dd50 a2=94 a3=54428f items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.846000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:03.846000 audit: BPF prog-id=232 op=UNLOAD Dec 16 15:28:03.846000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd4da0dd50 a2=94 a3=54428f items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.846000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:03.846000 audit: BPF prog-id=233 op=LOAD Dec 16 15:28:03.846000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd4da0dd80 a2=94 a3=2 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.846000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:03.846000 audit: BPF prog-id=233 op=UNLOAD Dec 16 15:28:03.846000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd4da0dd80 a2=0 a3=2 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:03.846000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:03.917012 kubelet[3007]: I1216 15:28:03.916966 3007 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b6af861-64ec-4458-a739-99f9aaa2e0d3-whisker-backend-key-pair\") pod \"7b6af861-64ec-4458-a739-99f9aaa2e0d3\" (UID: \"7b6af861-64ec-4458-a739-99f9aaa2e0d3\") " Dec 16 15:28:03.918757 kubelet[3007]: I1216 15:28:03.918676 3007 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b6af861-64ec-4458-a739-99f9aaa2e0d3-whisker-ca-bundle\") pod \"7b6af861-64ec-4458-a739-99f9aaa2e0d3\" (UID: \"7b6af861-64ec-4458-a739-99f9aaa2e0d3\") " Dec 16 15:28:03.920542 kubelet[3007]: I1216 15:28:03.919334 3007 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfgrl\" (UniqueName: \"kubernetes.io/projected/7b6af861-64ec-4458-a739-99f9aaa2e0d3-kube-api-access-dfgrl\") pod \"7b6af861-64ec-4458-a739-99f9aaa2e0d3\" (UID: \"7b6af861-64ec-4458-a739-99f9aaa2e0d3\") " Dec 16 15:28:03.955794 systemd[1]: var-lib-kubelet-pods-7b6af861\x2d64ec\x2d4458\x2da739\x2d99f9aaa2e0d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddfgrl.mount: Deactivated successfully. Dec 16 15:28:03.955945 systemd[1]: var-lib-kubelet-pods-7b6af861\x2d64ec\x2d4458\x2da739\x2d99f9aaa2e0d3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
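The two .mount units deactivated above use systemd's unit-name escaping, so the kubelet volume paths are not immediately readable. A minimal sketch (hypothetical helper, not part of the log) that reverses the escaping under the standard rules, where "-" stands for "/" and literal characters are written as \xNN:

    import re

    def unescape_mount_unit(unit: str) -> str:
        # Reverse of `systemd-escape --path --suffix=mount`: "-" maps back to "/",
        # and \xNN escapes map back to the literal byte.
        name = unit.removesuffix(".mount").replace("-", "/")
        return "/" + re.sub(r"\\x([0-9a-fA-F]{2})",
                            lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_mount_unit(
        "var-lib-kubelet-pods-7b6af861\\x2d64ec\\x2d4458\\x2da739\\x2d99f9aaa2e0d3-"
        "volumes-kubernetes.io\\x7eprojected-kube\\x2dapi\\x2daccess\\x2ddfgrl.mount"))
    # /var/lib/kubelet/pods/7b6af861-64ec-4458-a739-99f9aaa2e0d3/volumes/kubernetes.io~projected/kube-api-access-dfgrl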
Dec 16 15:28:03.966912 kubelet[3007]: I1216 15:28:03.964890 3007 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6af861-64ec-4458-a739-99f9aaa2e0d3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7b6af861-64ec-4458-a739-99f9aaa2e0d3" (UID: "7b6af861-64ec-4458-a739-99f9aaa2e0d3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 15:28:03.967018 kubelet[3007]: I1216 15:28:03.966991 3007 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6af861-64ec-4458-a739-99f9aaa2e0d3-kube-api-access-dfgrl" (OuterVolumeSpecName: "kube-api-access-dfgrl") pod "7b6af861-64ec-4458-a739-99f9aaa2e0d3" (UID: "7b6af861-64ec-4458-a739-99f9aaa2e0d3"). InnerVolumeSpecName "kube-api-access-dfgrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 15:28:03.969115 kubelet[3007]: I1216 15:28:03.962344 3007 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6af861-64ec-4458-a739-99f9aaa2e0d3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7b6af861-64ec-4458-a739-99f9aaa2e0d3" (UID: "7b6af861-64ec-4458-a739-99f9aaa2e0d3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 15:28:04.021905 kubelet[3007]: I1216 15:28:04.021833 3007 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dfgrl\" (UniqueName: \"kubernetes.io/projected/7b6af861-64ec-4458-a739-99f9aaa2e0d3-kube-api-access-dfgrl\") on node \"srv-g2i2t.gb1.brightbox.com\" DevicePath \"\"" Dec 16 15:28:04.021905 kubelet[3007]: I1216 15:28:04.021898 3007 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b6af861-64ec-4458-a739-99f9aaa2e0d3-whisker-backend-key-pair\") on node \"srv-g2i2t.gb1.brightbox.com\" DevicePath \"\"" Dec 16 15:28:04.021905 kubelet[3007]: I1216 15:28:04.021918 3007 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b6af861-64ec-4458-a739-99f9aaa2e0d3-whisker-ca-bundle\") on node \"srv-g2i2t.gb1.brightbox.com\" DevicePath \"\"" Dec 16 15:28:04.128574 systemd[1]: Removed slice kubepods-besteffort-pod7b6af861_64ec_4458_a739_99f9aaa2e0d3.slice - libcontainer container kubepods-besteffort-pod7b6af861_64ec_4458_a739_99f9aaa2e0d3.slice. Dec 16 15:28:04.128735 systemd[1]: kubepods-besteffort-pod7b6af861_64ec_4458_a739_99f9aaa2e0d3.slice: Consumed 58ms CPU time, 4.8M memory peak, 1.2M read from disk. 
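The slice removed above encodes the pod UID using the systemd cgroup driver convention, in which the UID's dashes become underscores. A small sketch (hypothetical helper, not part of the log) mapping the slice name back to the UID that appears in the surrounding kubelet entries:

    def pod_uid_from_slice(slice_name: str) -> str:
        # kubepods-<qos>-pod<uid with "_" instead of "-">.slice
        stem = slice_name.removesuffix(".slice")
        return stem.split("-pod", 1)[1].replace("_", "-")

    print(pod_uid_from_slice("kubepods-besteffort-pod7b6af861_64ec_4458_a739_99f9aaa2e0d3.slice"))
    # 7b6af861-64ec-4458-a739-99f9aaa2e0d3, the whisker pod torn down above.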
Dec 16 15:28:04.136211 kubelet[3007]: E1216 15:28:04.135763 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gjpms" podUID="fa623572-3c69-4396-806f-a142b1ffa21a" Dec 16 15:28:04.145232 kubelet[3007]: E1216 15:28:04.144961 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:28:04.241000 audit: BPF prog-id=234 op=LOAD Dec 16 15:28:04.241000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd4da0dc40 a2=94 a3=1 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.241000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:04.241000 audit: BPF prog-id=234 op=UNLOAD Dec 16 15:28:04.241000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd4da0dc40 a2=94 a3=1 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.241000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:04.398835 systemd[1]: Created slice kubepods-besteffort-podee0bd064_b3c2_43ac_bd37_37a79e955339.slice - libcontainer container kubepods-besteffort-podee0bd064_b3c2_43ac_bd37_37a79e955339.slice. 
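The repeated ErrImagePull / ImagePullBackOff entries all trace back to the 404 that containerd received from ghcr.io. A sketch (hypothetical check, not part of the log) of how one might confirm from another host whether the tag exists, following the generic Registry v2 anonymous token flow rather than anything specific to this cluster; the repository and tag names are taken from the log, everything else is an assumption:

    import json
    import re
    import urllib.error
    import urllib.request

    ACCEPT = ", ".join([
        "application/vnd.oci.image.index.v1+json",
        "application/vnd.docker.distribution.manifest.list.v2+json",
        "application/vnd.docker.distribution.manifest.v2+json",
    ])

    def tag_exists(registry, repository, tag):
        url = f"https://{registry}/v2/{repository}/manifests/{tag}"

        def fetch(token=None):
            headers = {"Accept": ACCEPT}
            if token:
                headers["Authorization"] = f"Bearer {token}"
            return urllib.request.urlopen(urllib.request.Request(url, headers=headers))

        try:
            fetch()
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False          # tag (or repository) does not exist
            if err.code != 401:
                raise
            # 401: follow the WWW-Authenticate challenge for an anonymous pull token.
            params = dict(re.findall(r'(\w+)="([^"]*)"', err.headers["WWW-Authenticate"]))
            token_url = f"{params['realm']}?service={params['service']}&scope={params['scope']}"
            with urllib.request.urlopen(token_url) as resp:
                token = json.load(resp)["token"]
        try:
            fetch(token)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    print(tag_exists("ghcr.io", "flatcar/calico/goldmane", "v3.30.4"))

A False result would match the "failed to resolve image ... not found" errors above; a True result would instead point at a node-local or proxy problem.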
Dec 16 15:28:04.427054 kubelet[3007]: I1216 15:28:04.426985 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee0bd064-b3c2-43ac-bd37-37a79e955339-whisker-ca-bundle\") pod \"whisker-c66d96545-p5cf6\" (UID: \"ee0bd064-b3c2-43ac-bd37-37a79e955339\") " pod="calico-system/whisker-c66d96545-p5cf6" Dec 16 15:28:04.427054 kubelet[3007]: I1216 15:28:04.427057 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsf5k\" (UniqueName: \"kubernetes.io/projected/ee0bd064-b3c2-43ac-bd37-37a79e955339-kube-api-access-zsf5k\") pod \"whisker-c66d96545-p5cf6\" (UID: \"ee0bd064-b3c2-43ac-bd37-37a79e955339\") " pod="calico-system/whisker-c66d96545-p5cf6" Dec 16 15:28:04.427339 kubelet[3007]: I1216 15:28:04.427095 3007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee0bd064-b3c2-43ac-bd37-37a79e955339-whisker-backend-key-pair\") pod \"whisker-c66d96545-p5cf6\" (UID: \"ee0bd064-b3c2-43ac-bd37-37a79e955339\") " pod="calico-system/whisker-c66d96545-p5cf6" Dec 16 15:28:04.507000 audit: BPF prog-id=235 op=LOAD Dec 16 15:28:04.507000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd4da0dc30 a2=94 a3=4 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.507000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:04.507000 audit: BPF prog-id=235 op=UNLOAD Dec 16 15:28:04.507000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd4da0dc30 a2=0 a3=4 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.507000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:04.507000 audit: BPF prog-id=236 op=LOAD Dec 16 15:28:04.507000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd4da0da90 a2=94 a3=5 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.507000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:04.507000 audit: BPF prog-id=236 op=UNLOAD Dec 16 15:28:04.507000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd4da0da90 a2=0 a3=5 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.507000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:04.507000 audit: BPF prog-id=237 op=LOAD Dec 16 15:28:04.507000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd4da0dcb0 a2=94 a3=6 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.507000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:04.508000 audit: BPF prog-id=237 op=UNLOAD Dec 16 15:28:04.508000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd4da0dcb0 a2=0 a3=6 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.508000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:04.508000 audit: BPF prog-id=238 op=LOAD Dec 16 15:28:04.508000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd4da0d460 a2=94 a3=88 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.508000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:04.508000 audit: BPF prog-id=239 op=LOAD Dec 16 15:28:04.508000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd4da0d2e0 a2=94 a3=2 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.508000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:04.508000 audit: BPF prog-id=239 op=UNLOAD Dec 16 15:28:04.508000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd4da0d310 a2=0 a3=7ffd4da0d410 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.508000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:04.509000 audit: BPF prog-id=238 op=UNLOAD Dec 16 15:28:04.509000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1adcdd10 a2=0 a3=5a7aafe865c978d0 items=0 ppid=4801 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.509000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 15:28:04.512000 audit[5029]: NETFILTER_CFG table=filter:121 family=2 entries=17 op=nft_register_rule pid=5029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:28:04.512000 audit[5029]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc3af51000 a2=0 a3=7ffc3af50fec items=0 ppid=3127 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.512000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:28:04.527000 audit: BPF prog-id=230 op=UNLOAD Dec 16 15:28:04.527000 audit[4801]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000ce3f00 a2=0 a3=0 items=0 ppid=4787 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.527000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 15:28:04.529000 audit[5029]: NETFILTER_CFG table=nat:122 family=2 entries=35 op=nft_register_chain pid=5029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:28:04.529000 audit[5029]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc3af51000 a2=0 a3=7ffc3af50fec items=0 ppid=3127 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:28:04.661000 audit[5058]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=5058 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 15:28:04.661000 audit[5058]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcd627e1b0 a2=0 a3=7ffcd627e19c items=0 ppid=4801 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.661000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 15:28:04.668000 audit[5056]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=5056 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 15:28:04.668000 audit[5056]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc34d7f1d0 a2=0 a3=7ffc34d7f1bc items=0 ppid=4801 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.668000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 15:28:04.672000 audit[5060]: NETFILTER_CFG table=nat:125 family=2 entries=15 op=nft_register_chain pid=5060 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 15:28:04.672000 audit[5060]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe33ae0930 a2=0 a3=7ffe33ae091c items=0 ppid=4801 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.672000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 15:28:04.713176 containerd[1667]: time="2025-12-16T15:28:04.712181495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c66d96545-p5cf6,Uid:ee0bd064-b3c2-43ac-bd37-37a79e955339,Namespace:calico-system,Attempt:0,}" Dec 16 15:28:04.689000 audit[5063]: NETFILTER_CFG table=filter:126 family=2 entries=185 op=nft_register_chain pid=5063 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 15:28:04.689000 audit[5063]: SYSCALL arch=c000003e syscall=46 success=yes exit=105860 a0=3 a1=7ffcbc890ef0 a2=0 a3=7ffcbc890edc items=0 ppid=4801 pid=5063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.689000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 15:28:04.899258 systemd-networkd[1577]: cali84b13670d53: Link UP Dec 16 15:28:04.900302 systemd-networkd[1577]: cali84b13670d53: Gained carrier Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.796 [INFO][5069] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0 whisker-c66d96545- calico-system ee0bd064-b3c2-43ac-bd37-37a79e955339 1060 0 2025-12-16 15:28:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c66d96545 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-g2i2t.gb1.brightbox.com whisker-c66d96545-p5cf6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali84b13670d53 [] [] }} ContainerID="b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" Namespace="calico-system" Pod="whisker-c66d96545-p5cf6" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-" Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.796 [INFO][5069] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" Namespace="calico-system" Pod="whisker-c66d96545-p5cf6" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0" Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.841 [INFO][5084] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" 
HandleID="k8s-pod-network.b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0" Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.841 [INFO][5084] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" HandleID="k8s-pod-network.b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-g2i2t.gb1.brightbox.com", "pod":"whisker-c66d96545-p5cf6", "timestamp":"2025-12-16 15:28:04.841126134 +0000 UTC"}, Hostname:"srv-g2i2t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.841 [INFO][5084] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.841 [INFO][5084] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.841 [INFO][5084] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-g2i2t.gb1.brightbox.com' Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.852 [INFO][5084] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.859 [INFO][5084] ipam/ipam.go 394: Looking up existing affinities for host host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.865 [INFO][5084] ipam/ipam.go 511: Trying affinity for 192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.867 [INFO][5084] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.871 [INFO][5084] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.871 [INFO][5084] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.874 [INFO][5084] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6 Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.881 [INFO][5084] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.890 [INFO][5084] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.198/26] block=192.168.126.192/26 handle="k8s-pod-network.b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:04.923494 
containerd[1667]: 2025-12-16 15:28:04.890 [INFO][5084] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.198/26] handle="k8s-pod-network.b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.890 [INFO][5084] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 15:28:04.923494 containerd[1667]: 2025-12-16 15:28:04.891 [INFO][5084] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.198/26] IPv6=[] ContainerID="b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" HandleID="k8s-pod-network.b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0" Dec 16 15:28:04.925051 containerd[1667]: 2025-12-16 15:28:04.894 [INFO][5069] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" Namespace="calico-system" Pod="whisker-c66d96545-p5cf6" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0", GenerateName:"whisker-c66d96545-", Namespace:"calico-system", SelfLink:"", UID:"ee0bd064-b3c2-43ac-bd37-37a79e955339", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 28, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c66d96545", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"", Pod:"whisker-c66d96545-p5cf6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali84b13670d53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:04.925051 containerd[1667]: 2025-12-16 15:28:04.894 [INFO][5069] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.198/32] ContainerID="b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" Namespace="calico-system" Pod="whisker-c66d96545-p5cf6" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0" Dec 16 15:28:04.925051 containerd[1667]: 2025-12-16 15:28:04.894 [INFO][5069] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84b13670d53 ContainerID="b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" Namespace="calico-system" Pod="whisker-c66d96545-p5cf6" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0" Dec 16 15:28:04.925051 containerd[1667]: 2025-12-16 15:28:04.902 [INFO][5069] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" Namespace="calico-system" 
Pod="whisker-c66d96545-p5cf6" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0" Dec 16 15:28:04.925051 containerd[1667]: 2025-12-16 15:28:04.902 [INFO][5069] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" Namespace="calico-system" Pod="whisker-c66d96545-p5cf6" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0", GenerateName:"whisker-c66d96545-", Namespace:"calico-system", SelfLink:"", UID:"ee0bd064-b3c2-43ac-bd37-37a79e955339", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 28, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c66d96545", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6", Pod:"whisker-c66d96545-p5cf6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali84b13670d53", MAC:"aa:54:48:c9:b9:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:04.925051 containerd[1667]: 2025-12-16 15:28:04.918 [INFO][5069] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" Namespace="calico-system" Pod="whisker-c66d96545-p5cf6" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--c66d96545--p5cf6-eth0" Dec 16 15:28:04.963863 containerd[1667]: time="2025-12-16T15:28:04.963804009Z" level=info msg="connecting to shim b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6" address="unix:///run/containerd/s/22a49e3568b9b998c962fe420de4ce2332959657b4ca42dbb0227f48f57bac5d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:28:04.981000 audit[5120]: NETFILTER_CFG table=filter:127 family=2 entries=69 op=nft_register_chain pid=5120 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 15:28:04.981000 audit[5120]: SYSCALL arch=c000003e syscall=46 success=yes exit=37636 a0=3 a1=7ffcbef1dc20 a2=0 a3=7ffcbef1dc0c items=0 ppid=4801 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:04.981000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 15:28:05.018795 systemd[1]: Started cri-containerd-b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6.scope - libcontainer 
container b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6. Dec 16 15:28:05.048000 audit: BPF prog-id=240 op=LOAD Dec 16 15:28:05.049000 audit: BPF prog-id=241 op=LOAD Dec 16 15:28:05.049000 audit[5122]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5109 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:05.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626536643537653865306133656639613762623435633066383363 Dec 16 15:28:05.050000 audit: BPF prog-id=241 op=UNLOAD Dec 16 15:28:05.050000 audit[5122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5109 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:05.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626536643537653865306133656639613762623435633066383363 Dec 16 15:28:05.050000 audit: BPF prog-id=242 op=LOAD Dec 16 15:28:05.050000 audit[5122]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5109 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:05.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626536643537653865306133656639613762623435633066383363 Dec 16 15:28:05.050000 audit: BPF prog-id=243 op=LOAD Dec 16 15:28:05.050000 audit[5122]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5109 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:05.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626536643537653865306133656639613762623435633066383363 Dec 16 15:28:05.050000 audit: BPF prog-id=243 op=UNLOAD Dec 16 15:28:05.050000 audit[5122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5109 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:05.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626536643537653865306133656639613762623435633066383363 Dec 16 15:28:05.050000 
audit: BPF prog-id=242 op=UNLOAD Dec 16 15:28:05.050000 audit[5122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5109 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:05.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626536643537653865306133656639613762623435633066383363 Dec 16 15:28:05.050000 audit: BPF prog-id=244 op=LOAD Dec 16 15:28:05.050000 audit[5122]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5109 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:05.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230626536643537653865306133656639613762623435633066383363 Dec 16 15:28:05.122102 containerd[1667]: time="2025-12-16T15:28:05.121970864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c66d96545-p5cf6,Uid:ee0bd064-b3c2-43ac-bd37-37a79e955339,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0be6d57e8e0a3ef9a7bb45c0f83ca1f88df1f0fd95449b6a34c5c2182a585d6\"" Dec 16 15:28:05.124790 containerd[1667]: time="2025-12-16T15:28:05.124733159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 15:28:05.341559 kubelet[3007]: I1216 15:28:05.341214 3007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b6af861-64ec-4458-a739-99f9aaa2e0d3" path="/var/lib/kubelet/pods/7b6af861-64ec-4458-a739-99f9aaa2e0d3/volumes" Dec 16 15:28:05.437462 containerd[1667]: time="2025-12-16T15:28:05.437370094Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:05.438736 containerd[1667]: time="2025-12-16T15:28:05.438676530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 15:28:05.438818 containerd[1667]: time="2025-12-16T15:28:05.438787811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:05.439260 kubelet[3007]: E1216 15:28:05.439149 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 15:28:05.439548 kubelet[3007]: E1216 15:28:05.439260 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 15:28:05.439548 kubelet[3007]: E1216 15:28:05.439488 3007 kuberuntime_manager.go:1449] "Unhandled 
Error" err="container whisker start failed in pod whisker-c66d96545-p5cf6_calico-system(ee0bd064-b3c2-43ac-bd37-37a79e955339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:05.441315 containerd[1667]: time="2025-12-16T15:28:05.440992296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 15:28:05.451734 systemd-networkd[1577]: vxlan.calico: Gained IPv6LL Dec 16 15:28:05.751291 containerd[1667]: time="2025-12-16T15:28:05.751041627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:05.753112 containerd[1667]: time="2025-12-16T15:28:05.752941806Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 15:28:05.753112 containerd[1667]: time="2025-12-16T15:28:05.753051613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:05.754641 kubelet[3007]: E1216 15:28:05.753319 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 15:28:05.754641 kubelet[3007]: E1216 15:28:05.753393 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 15:28:05.754641 kubelet[3007]: E1216 15:28:05.753733 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c66d96545-p5cf6_calico-system(ee0bd064-b3c2-43ac-bd37-37a79e955339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:05.754641 kubelet[3007]: E1216 15:28:05.753821 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c66d96545-p5cf6" podUID="ee0bd064-b3c2-43ac-bd37-37a79e955339" Dec 16 15:28:06.120721 kubelet[3007]: E1216 15:28:06.120404 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c66d96545-p5cf6" podUID="ee0bd064-b3c2-43ac-bd37-37a79e955339" Dec 16 15:28:06.170000 audit[5147]: NETFILTER_CFG table=filter:128 family=2 entries=14 op=nft_register_rule pid=5147 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:28:06.170000 audit[5147]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffffb8c4b50 a2=0 a3=7ffffb8c4b3c items=0 ppid=3127 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:06.170000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:28:06.177000 audit[5147]: NETFILTER_CFG table=nat:129 family=2 entries=20 op=nft_register_rule pid=5147 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:28:06.177000 audit[5147]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffffb8c4b50 a2=0 a3=7ffffb8c4b3c items=0 ppid=3127 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:06.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:28:06.668293 systemd-networkd[1577]: cali84b13670d53: Gained IPv6LL Dec 16 15:28:07.126313 kubelet[3007]: E1216 15:28:07.126216 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c66d96545-p5cf6" podUID="ee0bd064-b3c2-43ac-bd37-37a79e955339" Dec 16 15:28:09.342982 containerd[1667]: time="2025-12-16T15:28:09.342901669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kpx8g,Uid:8a3ce37e-c24e-49fb-956d-3bca98f84f79,Namespace:kube-system,Attempt:0,}" Dec 16 15:28:09.344262 containerd[1667]: time="2025-12-16T15:28:09.344141547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4bf85fc-bgpbx,Uid:42b32087-b938-4963-9aa0-ab40f5c370b3,Namespace:calico-apiserver,Attempt:0,}" Dec 16 15:28:09.593233 
systemd-networkd[1577]: calieac524e6f53: Link UP Dec 16 15:28:09.594945 systemd-networkd[1577]: calieac524e6f53: Gained carrier Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.457 [INFO][5151] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0 calico-apiserver-b4bf85fc- calico-apiserver 42b32087-b938-4963-9aa0-ab40f5c370b3 883 0 2025-12-16 15:27:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b4bf85fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-g2i2t.gb1.brightbox.com calico-apiserver-b4bf85fc-bgpbx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieac524e6f53 [] [] }} ContainerID="4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-bgpbx" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-" Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.458 [INFO][5151] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-bgpbx" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0" Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.514 [INFO][5178] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" HandleID="k8s-pod-network.4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0" Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.514 [INFO][5178] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" HandleID="k8s-pod-network.4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-g2i2t.gb1.brightbox.com", "pod":"calico-apiserver-b4bf85fc-bgpbx", "timestamp":"2025-12-16 15:28:09.514540021 +0000 UTC"}, Hostname:"srv-g2i2t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.514 [INFO][5178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.514 [INFO][5178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.514 [INFO][5178] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-g2i2t.gb1.brightbox.com' Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.526 [INFO][5178] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.533 [INFO][5178] ipam/ipam.go 394: Looking up existing affinities for host host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.539 [INFO][5178] ipam/ipam.go 511: Trying affinity for 192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.542 [INFO][5178] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.546 [INFO][5178] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.546 [INFO][5178] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.550 [INFO][5178] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6 Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.559 [INFO][5178] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.572 [INFO][5178] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.199/26] block=192.168.126.192/26 handle="k8s-pod-network.4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.572 [INFO][5178] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.199/26] handle="k8s-pod-network.4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.574 [INFO][5178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 15:28:09.635206 containerd[1667]: 2025-12-16 15:28:09.574 [INFO][5178] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.199/26] IPv6=[] ContainerID="4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" HandleID="k8s-pod-network.4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0" Dec 16 15:28:09.637223 containerd[1667]: 2025-12-16 15:28:09.587 [INFO][5151] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-bgpbx" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0", GenerateName:"calico-apiserver-b4bf85fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"42b32087-b938-4963-9aa0-ab40f5c370b3", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b4bf85fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-b4bf85fc-bgpbx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieac524e6f53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:09.637223 containerd[1667]: 2025-12-16 15:28:09.587 [INFO][5151] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.199/32] ContainerID="4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-bgpbx" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0" Dec 16 15:28:09.637223 containerd[1667]: 2025-12-16 15:28:09.587 [INFO][5151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieac524e6f53 ContainerID="4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-bgpbx" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0" Dec 16 15:28:09.637223 containerd[1667]: 2025-12-16 15:28:09.593 [INFO][5151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-bgpbx" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0" Dec 16 15:28:09.637223 containerd[1667]: 2025-12-16 15:28:09.596 [INFO][5151] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-bgpbx" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0", GenerateName:"calico-apiserver-b4bf85fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"42b32087-b938-4963-9aa0-ab40f5c370b3", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b4bf85fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6", Pod:"calico-apiserver-b4bf85fc-bgpbx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieac524e6f53", MAC:"fe:35:84:67:e1:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:09.637223 containerd[1667]: 2025-12-16 15:28:09.621 [INFO][5151] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-bgpbx" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--bgpbx-eth0" Dec 16 15:28:09.713813 containerd[1667]: time="2025-12-16T15:28:09.713736841Z" level=info msg="connecting to shim 4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6" address="unix:///run/containerd/s/e5426be144bbd0bbca0f21155551e34e1ac7dd2a59d68fe855e4c701e0a2a6a5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:28:09.761244 systemd-networkd[1577]: cali7a3a583b38d: Link UP Dec 16 15:28:09.762477 systemd-networkd[1577]: cali7a3a583b38d: Gained carrier Dec 16 15:28:09.766652 systemd[1]: Started cri-containerd-4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6.scope - libcontainer container 4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6. 
Dec 16 15:28:09.786714 kernel: kauditd_printk_skb: 135 callbacks suppressed Dec 16 15:28:09.786988 kernel: audit: type=1325 audit(1765898889.768:734): table=filter:130 family=2 entries=59 op=nft_register_chain pid=5230 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 15:28:09.768000 audit[5230]: NETFILTER_CFG table=filter:130 family=2 entries=59 op=nft_register_chain pid=5230 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 15:28:09.796706 kernel: audit: type=1300 audit(1765898889.768:734): arch=c000003e syscall=46 success=yes exit=29492 a0=3 a1=7ffdc72bcea0 a2=0 a3=7ffdc72bce8c items=0 ppid=4801 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.768000 audit[5230]: SYSCALL arch=c000003e syscall=46 success=yes exit=29492 a0=3 a1=7ffdc72bcea0 a2=0 a3=7ffdc72bce8c items=0 ppid=4801 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.768000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 15:28:09.811558 kernel: audit: type=1327 audit(1765898889.768:734): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.439 [INFO][5152] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0 coredns-66bc5c9577- kube-system 8a3ce37e-c24e-49fb-956d-3bca98f84f79 878 0 2025-12-16 15:27:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-g2i2t.gb1.brightbox.com coredns-66bc5c9577-kpx8g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7a3a583b38d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" Namespace="kube-system" Pod="coredns-66bc5c9577-kpx8g" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-" Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.439 [INFO][5152] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" Namespace="kube-system" Pod="coredns-66bc5c9577-kpx8g" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0" Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.520 [INFO][5173] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" HandleID="k8s-pod-network.c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" Workload="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0" Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.521 [INFO][5173] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" HandleID="k8s-pod-network.c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" Workload="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003da200), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-g2i2t.gb1.brightbox.com", "pod":"coredns-66bc5c9577-kpx8g", "timestamp":"2025-12-16 15:28:09.520309546 +0000 UTC"}, Hostname:"srv-g2i2t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.521 [INFO][5173] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.573 [INFO][5173] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.573 [INFO][5173] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-g2i2t.gb1.brightbox.com' Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.640 [INFO][5173] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.652 [INFO][5173] ipam/ipam.go 394: Looking up existing affinities for host host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.662 [INFO][5173] ipam/ipam.go 511: Trying affinity for 192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.666 [INFO][5173] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.697 [INFO][5173] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.697 [INFO][5173] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.702 [INFO][5173] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331 Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.712 [INFO][5173] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.733 [INFO][5173] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.200/26] block=192.168.126.192/26 handle="k8s-pod-network.c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.734 [INFO][5173] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.200/26] handle="k8s-pod-network.c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:09.813967 
containerd[1667]: 2025-12-16 15:28:09.734 [INFO][5173] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 15:28:09.813967 containerd[1667]: 2025-12-16 15:28:09.734 [INFO][5173] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.200/26] IPv6=[] ContainerID="c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" HandleID="k8s-pod-network.c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" Workload="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0" Dec 16 15:28:09.815096 containerd[1667]: 2025-12-16 15:28:09.751 [INFO][5152] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" Namespace="kube-system" Pod="coredns-66bc5c9577-kpx8g" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8a3ce37e-c24e-49fb-956d-3bca98f84f79", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-kpx8g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a3a583b38d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:09.815096 containerd[1667]: 2025-12-16 15:28:09.752 [INFO][5152] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.200/32] ContainerID="c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" Namespace="kube-system" Pod="coredns-66bc5c9577-kpx8g" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0" Dec 16 15:28:09.815096 containerd[1667]: 2025-12-16 15:28:09.752 [INFO][5152] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a3a583b38d 
ContainerID="c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" Namespace="kube-system" Pod="coredns-66bc5c9577-kpx8g" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0" Dec 16 15:28:09.815096 containerd[1667]: 2025-12-16 15:28:09.763 [INFO][5152] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" Namespace="kube-system" Pod="coredns-66bc5c9577-kpx8g" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0" Dec 16 15:28:09.815096 containerd[1667]: 2025-12-16 15:28:09.768 [INFO][5152] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" Namespace="kube-system" Pod="coredns-66bc5c9577-kpx8g" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8a3ce37e-c24e-49fb-956d-3bca98f84f79", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331", Pod:"coredns-66bc5c9577-kpx8g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a3a583b38d", MAC:"e2:34:9f:f9:0b:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:09.815432 containerd[1667]: 2025-12-16 15:28:09.805 [INFO][5152] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" Namespace="kube-system" Pod="coredns-66bc5c9577-kpx8g" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-coredns--66bc5c9577--kpx8g-eth0" Dec 16 15:28:09.851929 containerd[1667]: 
time="2025-12-16T15:28:09.851794794Z" level=info msg="connecting to shim c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331" address="unix:///run/containerd/s/c43ac9f79f4fead417d8c894a0a3f14ceacb3e953ed1aa963f21d2ed51b4f201" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:28:09.886621 kernel: audit: type=1334 audit(1765898889.882:735): prog-id=245 op=LOAD Dec 16 15:28:09.888628 kernel: audit: type=1334 audit(1765898889.886:736): prog-id=246 op=LOAD Dec 16 15:28:09.882000 audit: BPF prog-id=245 op=LOAD Dec 16 15:28:09.886000 audit: BPF prog-id=246 op=LOAD Dec 16 15:28:09.886000 audit[5218]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.895543 kernel: audit: type=1300 audit(1765898889.886:736): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464613130316233356461626461326535646335386535366263373239 Dec 16 15:28:09.902649 kernel: audit: type=1327 audit(1765898889.886:736): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464613130316233356461626461326535646335386535366263373239 Dec 16 15:28:09.905363 kernel: audit: type=1334 audit(1765898889.886:737): prog-id=246 op=UNLOAD Dec 16 15:28:09.886000 audit: BPF prog-id=246 op=UNLOAD Dec 16 15:28:09.907619 kernel: audit: type=1300 audit(1765898889.886:737): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.886000 audit[5218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464613130316233356461626461326535646335386535366263373239 Dec 16 15:28:09.918545 kernel: audit: type=1327 audit(1765898889.886:737): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464613130316233356461626461326535646335386535366263373239 Dec 16 15:28:09.887000 audit: BPF prog-id=247 op=LOAD Dec 16 15:28:09.887000 audit[5218]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5205 
pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464613130316233356461626461326535646335386535366263373239 Dec 16 15:28:09.887000 audit: BPF prog-id=248 op=LOAD Dec 16 15:28:09.887000 audit[5218]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464613130316233356461626461326535646335386535366263373239 Dec 16 15:28:09.887000 audit: BPF prog-id=248 op=UNLOAD Dec 16 15:28:09.887000 audit[5218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464613130316233356461626461326535646335386535366263373239 Dec 16 15:28:09.887000 audit: BPF prog-id=247 op=UNLOAD Dec 16 15:28:09.887000 audit[5218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464613130316233356461626461326535646335386535366263373239 Dec 16 15:28:09.887000 audit: BPF prog-id=249 op=LOAD Dec 16 15:28:09.887000 audit[5218]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464613130316233356461626461326535646335386535366263373239 Dec 16 15:28:09.934556 systemd[1]: Started cri-containerd-c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331.scope - libcontainer container c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331. 
Dec 16 15:28:09.953000 audit: BPF prog-id=250 op=LOAD Dec 16 15:28:09.954000 audit: BPF prog-id=251 op=LOAD Dec 16 15:28:09.954000 audit[5270]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5256 pid=5270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333383734356433343161626432393039313630613130393131333565 Dec 16 15:28:09.954000 audit: BPF prog-id=251 op=UNLOAD Dec 16 15:28:09.954000 audit[5270]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5256 pid=5270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333383734356433343161626432393039313630613130393131333565 Dec 16 15:28:09.955000 audit: BPF prog-id=252 op=LOAD Dec 16 15:28:09.955000 audit[5270]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5256 pid=5270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333383734356433343161626432393039313630613130393131333565 Dec 16 15:28:09.955000 audit: BPF prog-id=253 op=LOAD Dec 16 15:28:09.955000 audit[5270]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5256 pid=5270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333383734356433343161626432393039313630613130393131333565 Dec 16 15:28:09.956000 audit: BPF prog-id=253 op=UNLOAD Dec 16 15:28:09.956000 audit[5270]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5256 pid=5270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333383734356433343161626432393039313630613130393131333565 Dec 16 15:28:09.958000 audit: BPF prog-id=252 op=UNLOAD Dec 16 15:28:09.958000 audit[5270]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5256 pid=5270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333383734356433343161626432393039313630613130393131333565 Dec 16 15:28:09.958000 audit: BPF prog-id=254 op=LOAD Dec 16 15:28:09.958000 audit[5270]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5256 pid=5270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:09.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333383734356433343161626432393039313630613130393131333565 Dec 16 15:28:10.011000 audit[5295]: NETFILTER_CFG table=filter:131 family=2 entries=48 op=nft_register_chain pid=5295 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 15:28:10.011000 audit[5295]: SYSCALL arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffc8ad6b990 a2=0 a3=7ffc8ad6b97c items=0 ppid=4801 pid=5295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.011000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 15:28:10.027963 containerd[1667]: time="2025-12-16T15:28:10.027906507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4bf85fc-bgpbx,Uid:42b32087-b938-4963-9aa0-ab40f5c370b3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4da101b35dabda2e5dc58e56bc72977541e92427f1f7f5ab77e2b6c2495fb6b6\"" Dec 16 15:28:10.032844 containerd[1667]: time="2025-12-16T15:28:10.032801004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 15:28:10.045940 containerd[1667]: time="2025-12-16T15:28:10.045836354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kpx8g,Uid:8a3ce37e-c24e-49fb-956d-3bca98f84f79,Namespace:kube-system,Attempt:0,} returns sandbox id \"c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331\"" Dec 16 15:28:10.056144 containerd[1667]: time="2025-12-16T15:28:10.056021768Z" level=info msg="CreateContainer within sandbox \"c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 15:28:10.071433 containerd[1667]: time="2025-12-16T15:28:10.070751617Z" level=info msg="Container 9151268e338e329794487a3793995d16461b092e2bba623f84b82b299e025482: CDI devices from CRI Config.CDIDevices: []" Dec 16 15:28:10.081167 containerd[1667]: time="2025-12-16T15:28:10.081100425Z" level=info msg="CreateContainer within sandbox \"c38745d341abd2909160a1091135e0e65330683bde46fcb6fd7ebebed6d08331\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"9151268e338e329794487a3793995d16461b092e2bba623f84b82b299e025482\"" Dec 16 15:28:10.083194 containerd[1667]: time="2025-12-16T15:28:10.083149509Z" level=info msg="StartContainer for \"9151268e338e329794487a3793995d16461b092e2bba623f84b82b299e025482\"" Dec 16 15:28:10.086223 containerd[1667]: time="2025-12-16T15:28:10.086187425Z" level=info msg="connecting to shim 9151268e338e329794487a3793995d16461b092e2bba623f84b82b299e025482" address="unix:///run/containerd/s/c43ac9f79f4fead417d8c894a0a3f14ceacb3e953ed1aa963f21d2ed51b4f201" protocol=ttrpc version=3 Dec 16 15:28:10.120799 systemd[1]: Started cri-containerd-9151268e338e329794487a3793995d16461b092e2bba623f84b82b299e025482.scope - libcontainer container 9151268e338e329794487a3793995d16461b092e2bba623f84b82b299e025482. Dec 16 15:28:10.153000 audit: BPF prog-id=255 op=LOAD Dec 16 15:28:10.154000 audit: BPF prog-id=256 op=LOAD Dec 16 15:28:10.154000 audit[5308]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5256 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931353132363865333338653332393739343438376133373933393935 Dec 16 15:28:10.155000 audit: BPF prog-id=256 op=UNLOAD Dec 16 15:28:10.155000 audit[5308]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5256 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931353132363865333338653332393739343438376133373933393935 Dec 16 15:28:10.155000 audit: BPF prog-id=257 op=LOAD Dec 16 15:28:10.155000 audit[5308]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5256 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931353132363865333338653332393739343438376133373933393935 Dec 16 15:28:10.155000 audit: BPF prog-id=258 op=LOAD Dec 16 15:28:10.155000 audit[5308]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5256 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.155000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931353132363865333338653332393739343438376133373933393935 Dec 16 15:28:10.155000 audit: BPF prog-id=258 op=UNLOAD Dec 16 15:28:10.155000 audit[5308]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5256 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931353132363865333338653332393739343438376133373933393935 Dec 16 15:28:10.155000 audit: BPF prog-id=257 op=UNLOAD Dec 16 15:28:10.155000 audit[5308]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5256 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931353132363865333338653332393739343438376133373933393935 Dec 16 15:28:10.155000 audit: BPF prog-id=259 op=LOAD Dec 16 15:28:10.155000 audit[5308]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5256 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931353132363865333338653332393739343438376133373933393935 Dec 16 15:28:10.185873 containerd[1667]: time="2025-12-16T15:28:10.185803311Z" level=info msg="StartContainer for \"9151268e338e329794487a3793995d16461b092e2bba623f84b82b299e025482\" returns successfully" Dec 16 15:28:10.341316 containerd[1667]: time="2025-12-16T15:28:10.341254862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75675d747f-2cjwb,Uid:71c27099-4499-4f5b-8630-5d35b5c1100b,Namespace:calico-system,Attempt:0,}" Dec 16 15:28:10.393635 containerd[1667]: time="2025-12-16T15:28:10.393348216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:10.400105 containerd[1667]: time="2025-12-16T15:28:10.400055278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 15:28:10.400354 containerd[1667]: time="2025-12-16T15:28:10.400227499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:10.400735 kubelet[3007]: E1216 15:28:10.400684 3007 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:10.402535 kubelet[3007]: E1216 15:28:10.401150 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:10.402535 kubelet[3007]: E1216 15:28:10.401285 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b4bf85fc-bgpbx_calico-apiserver(42b32087-b938-4963-9aa0-ab40f5c370b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:10.402535 kubelet[3007]: E1216 15:28:10.401345 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" podUID="42b32087-b938-4963-9aa0-ab40f5c370b3" Dec 16 15:28:10.527058 systemd-networkd[1577]: cali7759207d8e8: Link UP Dec 16 15:28:10.528769 systemd-networkd[1577]: cali7759207d8e8: Gained carrier Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.412 [INFO][5345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0 calico-kube-controllers-75675d747f- calico-system 71c27099-4499-4f5b-8630-5d35b5c1100b 892 0 2025-12-16 15:27:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75675d747f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-g2i2t.gb1.brightbox.com calico-kube-controllers-75675d747f-2cjwb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7759207d8e8 [] [] }} ContainerID="a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" Namespace="calico-system" Pod="calico-kube-controllers-75675d747f-2cjwb" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-" Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.412 [INFO][5345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" Namespace="calico-system" Pod="calico-kube-controllers-75675d747f-2cjwb" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0" Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.468 [INFO][5357] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" HandleID="k8s-pod-network.a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" 
Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0" Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.468 [INFO][5357] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" HandleID="k8s-pod-network.a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-g2i2t.gb1.brightbox.com", "pod":"calico-kube-controllers-75675d747f-2cjwb", "timestamp":"2025-12-16 15:28:10.468018759 +0000 UTC"}, Hostname:"srv-g2i2t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.468 [INFO][5357] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.468 [INFO][5357] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.468 [INFO][5357] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-g2i2t.gb1.brightbox.com' Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.478 [INFO][5357] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.485 [INFO][5357] ipam/ipam.go 394: Looking up existing affinities for host host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.491 [INFO][5357] ipam/ipam.go 511: Trying affinity for 192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.493 [INFO][5357] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.497 [INFO][5357] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.497 [INFO][5357] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.499 [INFO][5357] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2 Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.507 [INFO][5357] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.518 [INFO][5357] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.201/26] block=192.168.126.192/26 handle="k8s-pod-network.a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 
15:28:10.518 [INFO][5357] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.201/26] handle="k8s-pod-network.a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.519 [INFO][5357] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 15:28:10.555122 containerd[1667]: 2025-12-16 15:28:10.519 [INFO][5357] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.201/26] IPv6=[] ContainerID="a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" HandleID="k8s-pod-network.a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0" Dec 16 15:28:10.560989 containerd[1667]: 2025-12-16 15:28:10.522 [INFO][5345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" Namespace="calico-system" Pod="calico-kube-controllers-75675d747f-2cjwb" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0", GenerateName:"calico-kube-controllers-75675d747f-", Namespace:"calico-system", SelfLink:"", UID:"71c27099-4499-4f5b-8630-5d35b5c1100b", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75675d747f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-75675d747f-2cjwb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7759207d8e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:10.560989 containerd[1667]: 2025-12-16 15:28:10.522 [INFO][5345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.201/32] ContainerID="a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" Namespace="calico-system" Pod="calico-kube-controllers-75675d747f-2cjwb" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0" Dec 16 15:28:10.560989 containerd[1667]: 2025-12-16 15:28:10.522 [INFO][5345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7759207d8e8 ContainerID="a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" Namespace="calico-system" Pod="calico-kube-controllers-75675d747f-2cjwb" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0" Dec 16 
15:28:10.560989 containerd[1667]: 2025-12-16 15:28:10.529 [INFO][5345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" Namespace="calico-system" Pod="calico-kube-controllers-75675d747f-2cjwb" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0" Dec 16 15:28:10.560989 containerd[1667]: 2025-12-16 15:28:10.530 [INFO][5345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" Namespace="calico-system" Pod="calico-kube-controllers-75675d747f-2cjwb" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0", GenerateName:"calico-kube-controllers-75675d747f-", Namespace:"calico-system", SelfLink:"", UID:"71c27099-4499-4f5b-8630-5d35b5c1100b", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75675d747f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2", Pod:"calico-kube-controllers-75675d747f-2cjwb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7759207d8e8", MAC:"5e:51:62:bb:5e:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:10.560989 containerd[1667]: 2025-12-16 15:28:10.551 [INFO][5345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" Namespace="calico-system" Pod="calico-kube-controllers-75675d747f-2cjwb" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--kube--controllers--75675d747f--2cjwb-eth0" Dec 16 15:28:10.595812 containerd[1667]: time="2025-12-16T15:28:10.595099081Z" level=info msg="connecting to shim a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2" address="unix:///run/containerd/s/120e7646ebce0aec0f3af110f5c6c818278e56c48ce9a4f1a62ce68093de4325" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:28:10.603000 audit[5382]: NETFILTER_CFG table=filter:132 family=2 entries=62 op=nft_register_chain pid=5382 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 15:28:10.603000 audit[5382]: SYSCALL arch=c000003e syscall=46 success=yes exit=28352 a0=3 a1=7ffc1bcaaac0 a2=0 a3=7ffc1bcaaaac items=0 ppid=4801 pid=5382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.603000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 15:28:10.652912 systemd[1]: Started cri-containerd-a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2.scope - libcontainer container a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2. Dec 16 15:28:10.682000 audit: BPF prog-id=260 op=LOAD Dec 16 15:28:10.683000 audit: BPF prog-id=261 op=LOAD Dec 16 15:28:10.683000 audit[5393]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5381 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333038636530386162393536376466613365323031633435346533 Dec 16 15:28:10.683000 audit: BPF prog-id=261 op=UNLOAD Dec 16 15:28:10.683000 audit[5393]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5381 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333038636530386162393536376466613365323031633435346533 Dec 16 15:28:10.683000 audit: BPF prog-id=262 op=LOAD Dec 16 15:28:10.683000 audit[5393]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5381 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333038636530386162393536376466613365323031633435346533 Dec 16 15:28:10.683000 audit: BPF prog-id=263 op=LOAD Dec 16 15:28:10.683000 audit[5393]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5381 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333038636530386162393536376466613365323031633435346533 Dec 16 15:28:10.683000 audit: BPF prog-id=263 op=UNLOAD Dec 16 15:28:10.683000 audit[5393]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5381 pid=5393 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333038636530386162393536376466613365323031633435346533 Dec 16 15:28:10.683000 audit: BPF prog-id=262 op=UNLOAD Dec 16 15:28:10.683000 audit[5393]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5381 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333038636530386162393536376466613365323031633435346533 Dec 16 15:28:10.684000 audit: BPF prog-id=264 op=LOAD Dec 16 15:28:10.684000 audit[5393]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5381 pid=5393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:10.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333038636530386162393536376466613365323031633435346533 Dec 16 15:28:10.740118 containerd[1667]: time="2025-12-16T15:28:10.740063971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75675d747f-2cjwb,Uid:71c27099-4499-4f5b-8630-5d35b5c1100b,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6308ce08ab9567dfa3e201c454e3c58171395011a726221d18d3d70fec0aca2\"" Dec 16 15:28:10.744584 containerd[1667]: time="2025-12-16T15:28:10.744508262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 15:28:11.080859 containerd[1667]: time="2025-12-16T15:28:11.080793180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:11.082276 containerd[1667]: time="2025-12-16T15:28:11.082169491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 15:28:11.082462 containerd[1667]: time="2025-12-16T15:28:11.082248264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:11.082694 kubelet[3007]: E1216 15:28:11.082581 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 15:28:11.082796 kubelet[3007]: E1216 15:28:11.082762 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 15:28:11.083152 kubelet[3007]: E1216 15:28:11.083106 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75675d747f-2cjwb_calico-system(71c27099-4499-4f5b-8630-5d35b5c1100b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:11.083326 kubelet[3007]: E1216 15:28:11.083278 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" podUID="71c27099-4499-4f5b-8630-5d35b5c1100b" Dec 16 15:28:11.164272 kubelet[3007]: E1216 15:28:11.164189 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" podUID="71c27099-4499-4f5b-8630-5d35b5c1100b" Dec 16 15:28:11.167938 kubelet[3007]: E1216 15:28:11.167888 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" podUID="42b32087-b938-4963-9aa0-ab40f5c370b3" Dec 16 15:28:11.252783 kubelet[3007]: I1216 15:28:11.252631 3007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kpx8g" podStartSLOduration=64.25258932 podStartE2EDuration="1m4.25258932s" podCreationTimestamp="2025-12-16 15:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 15:28:11.245913064 +0000 UTC m=+70.255848194" watchObservedRunningTime="2025-12-16 15:28:11.25258932 +0000 UTC m=+70.262524441" Dec 16 15:28:11.274000 audit[5420]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=5420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:28:11.274000 audit[5420]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffedc19dfb0 a2=0 a3=7ffedc19df9c items=0 ppid=3127 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:11.274000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:28:11.280000 audit[5420]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=5420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:28:11.280000 audit[5420]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffedc19dfb0 a2=0 a3=7ffedc19df9c items=0 ppid=3127 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:11.280000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:28:11.344914 containerd[1667]: time="2025-12-16T15:28:11.344765308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4bf85fc-lt27n,Uid:4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2,Namespace:calico-apiserver,Attempt:0,}" Dec 16 15:28:11.404772 systemd-networkd[1577]: calieac524e6f53: Gained IPv6LL Dec 16 15:28:11.567857 systemd-networkd[1577]: calie570344d295: Link UP Dec 16 15:28:11.572692 systemd-networkd[1577]: calie570344d295: Gained carrier Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.439 [INFO][5421] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0 calico-apiserver-b4bf85fc- calico-apiserver 4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2 888 0 2025-12-16 15:27:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b4bf85fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-g2i2t.gb1.brightbox.com calico-apiserver-b4bf85fc-lt27n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie570344d295 [] [] }} ContainerID="c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-lt27n" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-" Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.439 [INFO][5421] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-lt27n" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0" Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.491 [INFO][5433] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" HandleID="k8s-pod-network.c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0" Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.491 [INFO][5433] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" HandleID="k8s-pod-network.c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-g2i2t.gb1.brightbox.com", "pod":"calico-apiserver-b4bf85fc-lt27n", "timestamp":"2025-12-16 15:28:11.491508517 +0000 UTC"}, Hostname:"srv-g2i2t.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.491 [INFO][5433] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.491 [INFO][5433] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.491 [INFO][5433] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-g2i2t.gb1.brightbox.com' Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.507 [INFO][5433] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.514 [INFO][5433] ipam/ipam.go 394: Looking up existing affinities for host host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.521 [INFO][5433] ipam/ipam.go 511: Trying affinity for 192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.523 [INFO][5433] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.528 [INFO][5433] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.529 [INFO][5433] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.531 [INFO][5433] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.537 [INFO][5433] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.557 [INFO][5433] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.202/26] block=192.168.126.192/26 handle="k8s-pod-network.c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.557 [INFO][5433] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.202/26] handle="k8s-pod-network.c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" host="srv-g2i2t.gb1.brightbox.com" Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.557 [INFO][5433] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
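The IPAM trace above shows Calico's block-affinity allocation in order: take the host-wide IPAM lock, confirm this node's affinity for the 192.168.126.192/26 block, load the block, then claim the next free ordinal from it, which here comes back as 192.168.126.202/26. A minimal Go sketch of that last claiming step follows; the blockAllocator type, its method names, and the pre-seeded ordinals are invented for illustration under stated assumptions and are not Calico's actual ipam package.

package main

import (
	"fmt"
	"net"
)

// blockAllocator is a toy model of per-block IPAM bookkeeping: one /26 block
// affine to this host, plus a used-set of already-claimed ordinals.
// It illustrates the steps logged above, not Calico's real implementation.
type blockAllocator struct {
	cidr *net.IPNet
	used map[int]string // ordinal -> handle that claimed it
}

func newBlockAllocator(cidr string) (*blockAllocator, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	return &blockAllocator{cidr: ipnet, used: map[int]string{}}, nil
}

// assign claims the first free address in the block for the given handle,
// mirroring "Attempting to assign 1 addresses from block ... Successfully
// claimed IPs" in the trace above.
func (b *blockAllocator) assign(handle string) (net.IP, error) {
	ones, bits := b.cidr.Mask.Size()
	size := 1 << (bits - ones) // 64 addresses in a /26
	base := b.cidr.IP.To4()
	for ord := 0; ord < size; ord++ {
		if _, taken := b.used[ord]; taken {
			continue
		}
		b.used[ord] = handle
		ip := make(net.IP, len(base))
		copy(ip, base)
		ip[3] += byte(ord) // safe here: a /26 never crosses the last octet
		return ip, nil
	}
	return nil, fmt.Errorf("block %s is full", b.cidr)
}

func main() {
	alloc, err := newBlockAllocator("192.168.126.192/26")
	if err != nil {
		panic(err)
	}
	// Assume ordinals 0-9 were claimed by earlier pods on this node.
	for i := 0; i < 10; i++ {
		alloc.used[i] = fmt.Sprintf("earlier-pod-%d", i)
	}
	ip, _ := alloc.assign("k8s-pod-network.c3cdd91cf11462bc")
	fmt.Println("claimed", ip)
}

Under those assumptions the sketch prints "claimed 192.168.126.202", matching the address claimed in the trace; the real allocator additionally persists the block ("Writing block in order to claim IPs") so that concurrent hosts cannot claim the same ordinal, which is why the trace brackets the work with the host-wide IPAM lock.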
Dec 16 15:28:11.594775 containerd[1667]: 2025-12-16 15:28:11.557 [INFO][5433] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.202/26] IPv6=[] ContainerID="c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" HandleID="k8s-pod-network.c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" Workload="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0" Dec 16 15:28:11.598174 containerd[1667]: 2025-12-16 15:28:11.561 [INFO][5421] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-lt27n" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0", GenerateName:"calico-apiserver-b4bf85fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b4bf85fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-b4bf85fc-lt27n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie570344d295", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:11.598174 containerd[1667]: 2025-12-16 15:28:11.562 [INFO][5421] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.202/32] ContainerID="c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-lt27n" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0" Dec 16 15:28:11.598174 containerd[1667]: 2025-12-16 15:28:11.562 [INFO][5421] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie570344d295 ContainerID="c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-lt27n" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0" Dec 16 15:28:11.598174 containerd[1667]: 2025-12-16 15:28:11.571 [INFO][5421] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-lt27n" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0" Dec 16 15:28:11.598174 containerd[1667]: 2025-12-16 15:28:11.572 [INFO][5421] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-lt27n" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0", GenerateName:"calico-apiserver-b4bf85fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 15, 27, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b4bf85fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-g2i2t.gb1.brightbox.com", ContainerID:"c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e", Pod:"calico-apiserver-b4bf85fc-lt27n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie570344d295", MAC:"9e:52:25:3e:9a:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 15:28:11.598174 containerd[1667]: 2025-12-16 15:28:11.589 [INFO][5421] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" Namespace="calico-apiserver" Pod="calico-apiserver-b4bf85fc-lt27n" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-calico--apiserver--b4bf85fc--lt27n-eth0" Dec 16 15:28:11.628000 audit[5447]: NETFILTER_CFG table=filter:135 family=2 entries=41 op=nft_register_chain pid=5447 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 15:28:11.628000 audit[5447]: SYSCALL arch=c000003e syscall=46 success=yes exit=23096 a0=3 a1=7ffef15ae840 a2=0 a3=7ffef15ae82c items=0 ppid=4801 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:11.628000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 15:28:11.663898 containerd[1667]: time="2025-12-16T15:28:11.663189700Z" level=info msg="connecting to shim c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e" address="unix:///run/containerd/s/246408c449f5db6f7fd3029035179d37610048e75c634f29ffd9d8c25c82b80e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 15:28:11.714958 systemd[1]: Started cri-containerd-c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e.scope - libcontainer container 
c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e. Dec 16 15:28:11.733000 audit: BPF prog-id=265 op=LOAD Dec 16 15:28:11.734000 audit: BPF prog-id=266 op=LOAD Dec 16 15:28:11.734000 audit[5467]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5457 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:11.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333636464393163663131343632626362646161353936623736396262 Dec 16 15:28:11.734000 audit: BPF prog-id=266 op=UNLOAD Dec 16 15:28:11.734000 audit[5467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5457 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:11.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333636464393163663131343632626362646161353936623736396262 Dec 16 15:28:11.735000 audit: BPF prog-id=267 op=LOAD Dec 16 15:28:11.735000 audit[5467]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5457 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:11.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333636464393163663131343632626362646161353936623736396262 Dec 16 15:28:11.735000 audit: BPF prog-id=268 op=LOAD Dec 16 15:28:11.735000 audit[5467]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5457 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:11.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333636464393163663131343632626362646161353936623736396262 Dec 16 15:28:11.735000 audit: BPF prog-id=268 op=UNLOAD Dec 16 15:28:11.735000 audit[5467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5457 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:11.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333636464393163663131343632626362646161353936623736396262 Dec 16 15:28:11.735000 audit: BPF 
prog-id=267 op=UNLOAD Dec 16 15:28:11.735000 audit[5467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5457 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:11.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333636464393163663131343632626362646161353936623736396262 Dec 16 15:28:11.735000 audit: BPF prog-id=269 op=LOAD Dec 16 15:28:11.735000 audit[5467]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5457 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:11.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333636464393163663131343632626362646161353936623736396262 Dec 16 15:28:11.787929 systemd-networkd[1577]: cali7a3a583b38d: Gained IPv6LL Dec 16 15:28:11.794160 containerd[1667]: time="2025-12-16T15:28:11.794090367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4bf85fc-lt27n,Uid:4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c3cdd91cf11462bcbdaa596b769bbdb11a43a326a38955183f1d29a651e8c57e\"" Dec 16 15:28:11.796417 containerd[1667]: time="2025-12-16T15:28:11.796278058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 15:28:12.107775 systemd-networkd[1577]: cali7759207d8e8: Gained IPv6LL Dec 16 15:28:12.121705 containerd[1667]: time="2025-12-16T15:28:12.121566206Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:12.126861 containerd[1667]: time="2025-12-16T15:28:12.126776904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 15:28:12.127153 containerd[1667]: time="2025-12-16T15:28:12.126867110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:12.127935 kubelet[3007]: E1216 15:28:12.127582 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:12.127935 kubelet[3007]: E1216 15:28:12.127672 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:12.127935 kubelet[3007]: E1216 15:28:12.127803 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-b4bf85fc-lt27n_calico-apiserver(4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:12.127935 kubelet[3007]: E1216 15:28:12.127857 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" podUID="4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2" Dec 16 15:28:12.181798 kubelet[3007]: E1216 15:28:12.181744 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" podUID="71c27099-4499-4f5b-8630-5d35b5c1100b" Dec 16 15:28:12.185114 kubelet[3007]: E1216 15:28:12.185067 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" podUID="4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2" Dec 16 15:28:12.318000 audit[5494]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:28:12.318000 audit[5494]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe2d3fcb30 a2=0 a3=7ffe2d3fcb1c items=0 ppid=3127 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:12.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:28:12.343000 audit[5494]: NETFILTER_CFG table=nat:137 family=2 entries=56 op=nft_register_chain pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:28:12.343000 audit[5494]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe2d3fcb30 a2=0 a3=7ffe2d3fcb1c items=0 ppid=3127 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:12.343000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:28:12.619773 systemd-networkd[1577]: calie570344d295: Gained IPv6LL Dec 16 15:28:13.179351 kubelet[3007]: E1216 15:28:13.179207 3007 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" podUID="4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2" Dec 16 15:28:15.338102 containerd[1667]: time="2025-12-16T15:28:15.337909459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 15:28:15.660479 containerd[1667]: time="2025-12-16T15:28:15.660286954Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:15.661769 containerd[1667]: time="2025-12-16T15:28:15.661712006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 15:28:15.661852 containerd[1667]: time="2025-12-16T15:28:15.661817136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:15.662194 kubelet[3007]: E1216 15:28:15.662127 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:15.663490 kubelet[3007]: E1216 15:28:15.662813 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:15.663745 kubelet[3007]: E1216 15:28:15.663621 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c8d6c5b45-zv6vg_calico-apiserver(5a7920f3-ed03-480b-921f-a7a3eaa95ad5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:15.663745 kubelet[3007]: E1216 15:28:15.663697 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" podUID="5a7920f3-ed03-480b-921f-a7a3eaa95ad5" Dec 16 15:28:16.339124 containerd[1667]: time="2025-12-16T15:28:16.339003914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 15:28:16.709816 containerd[1667]: time="2025-12-16T15:28:16.708467876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:16.711093 containerd[1667]: time="2025-12-16T15:28:16.710953309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 15:28:16.711093 containerd[1667]: time="2025-12-16T15:28:16.710966343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:16.711351 kubelet[3007]: E1216 15:28:16.711293 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 15:28:16.712325 kubelet[3007]: E1216 15:28:16.711902 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 15:28:16.712325 kubelet[3007]: E1216 15:28:16.712075 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7wvd4_calico-system(27e89a24-5a1a-4b44-908b-951574a9d075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:16.713934 containerd[1667]: time="2025-12-16T15:28:16.713862160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 15:28:17.022871 containerd[1667]: time="2025-12-16T15:28:17.022688198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:17.024639 containerd[1667]: time="2025-12-16T15:28:17.024583948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 15:28:17.024998 containerd[1667]: time="2025-12-16T15:28:17.024808476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:17.025100 kubelet[3007]: E1216 15:28:17.025040 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 15:28:17.025188 kubelet[3007]: E1216 15:28:17.025116 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 15:28:17.025260 kubelet[3007]: E1216 15:28:17.025237 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7wvd4_calico-system(27e89a24-5a1a-4b44-908b-951574a9d075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:17.025353 kubelet[3007]: E1216 15:28:17.025295 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:28:18.336880 containerd[1667]: time="2025-12-16T15:28:18.336390716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 15:28:18.651290 containerd[1667]: time="2025-12-16T15:28:18.651044643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:18.652503 containerd[1667]: time="2025-12-16T15:28:18.652434763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 15:28:18.652666 containerd[1667]: time="2025-12-16T15:28:18.652467013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:18.653197 kubelet[3007]: E1216 15:28:18.652843 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 15:28:18.653197 kubelet[3007]: E1216 15:28:18.652908 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 15:28:18.653197 kubelet[3007]: E1216 15:28:18.653036 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-gjpms_calico-system(fa623572-3c69-4396-806f-a142b1ffa21a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:18.653197 kubelet[3007]: E1216 15:28:18.653090 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gjpms" podUID="fa623572-3c69-4396-806f-a142b1ffa21a" Dec 16 15:28:20.337156 containerd[1667]: time="2025-12-16T15:28:20.336420575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 15:28:20.651598 containerd[1667]: 
time="2025-12-16T15:28:20.651063123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:20.653108 containerd[1667]: time="2025-12-16T15:28:20.652955035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 15:28:20.653108 containerd[1667]: time="2025-12-16T15:28:20.653015664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:20.653442 kubelet[3007]: E1216 15:28:20.653346 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 15:28:20.654261 kubelet[3007]: E1216 15:28:20.653463 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 15:28:20.654261 kubelet[3007]: E1216 15:28:20.653719 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c66d96545-p5cf6_calico-system(ee0bd064-b3c2-43ac-bd37-37a79e955339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:20.656677 containerd[1667]: time="2025-12-16T15:28:20.656342502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 15:28:20.967617 containerd[1667]: time="2025-12-16T15:28:20.967409658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:20.969099 containerd[1667]: time="2025-12-16T15:28:20.969041724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 15:28:20.969239 containerd[1667]: time="2025-12-16T15:28:20.969182427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:20.969863 kubelet[3007]: E1216 15:28:20.969501 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 15:28:20.969863 kubelet[3007]: E1216 15:28:20.969601 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 15:28:20.969863 kubelet[3007]: E1216 15:28:20.969742 3007 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container whisker-backend start failed in pod whisker-c66d96545-p5cf6_calico-system(ee0bd064-b3c2-43ac-bd37-37a79e955339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:20.969863 kubelet[3007]: E1216 15:28:20.969807 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c66d96545-p5cf6" podUID="ee0bd064-b3c2-43ac-bd37-37a79e955339" Dec 16 15:28:22.336942 containerd[1667]: time="2025-12-16T15:28:22.336763830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 15:28:22.661262 containerd[1667]: time="2025-12-16T15:28:22.660468568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:22.663344 containerd[1667]: time="2025-12-16T15:28:22.663216265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 15:28:22.663344 containerd[1667]: time="2025-12-16T15:28:22.663261032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:22.663782 kubelet[3007]: E1216 15:28:22.663682 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:22.664243 kubelet[3007]: E1216 15:28:22.663812 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:22.664886 kubelet[3007]: E1216 15:28:22.664830 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b4bf85fc-bgpbx_calico-apiserver(42b32087-b938-4963-9aa0-ab40f5c370b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:22.665088 kubelet[3007]: E1216 15:28:22.664999 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" podUID="42b32087-b938-4963-9aa0-ab40f5c370b3" Dec 16 15:28:25.338708 containerd[1667]: time="2025-12-16T15:28:25.338609146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 15:28:25.671119 containerd[1667]: time="2025-12-16T15:28:25.670936711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:25.672908 containerd[1667]: time="2025-12-16T15:28:25.672859749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 15:28:25.673013 containerd[1667]: time="2025-12-16T15:28:25.672971351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:25.673285 kubelet[3007]: E1216 15:28:25.673213 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 15:28:25.674118 kubelet[3007]: E1216 15:28:25.673283 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 15:28:25.674118 kubelet[3007]: E1216 15:28:25.673399 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75675d747f-2cjwb_calico-system(71c27099-4499-4f5b-8630-5d35b5c1100b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:25.674118 kubelet[3007]: E1216 15:28:25.673449 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" podUID="71c27099-4499-4f5b-8630-5d35b5c1100b" Dec 16 15:28:26.337644 containerd[1667]: time="2025-12-16T15:28:26.336951922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 15:28:26.647653 containerd[1667]: time="2025-12-16T15:28:26.647391306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:26.649087 containerd[1667]: time="2025-12-16T15:28:26.649033013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 15:28:26.649177 containerd[1667]: time="2025-12-16T15:28:26.649143849Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:26.649525 kubelet[3007]: E1216 15:28:26.649455 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:26.649782 kubelet[3007]: E1216 15:28:26.649559 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:26.649782 kubelet[3007]: E1216 15:28:26.649725 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b4bf85fc-lt27n_calico-apiserver(4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:26.649904 kubelet[3007]: E1216 15:28:26.649777 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" podUID="4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2" Dec 16 15:28:27.338885 kubelet[3007]: E1216 15:28:27.337598 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" podUID="5a7920f3-ed03-480b-921f-a7a3eaa95ad5" Dec 16 15:28:32.339848 kubelet[3007]: E1216 15:28:32.339653 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:28:33.338704 kubelet[3007]: E1216 15:28:33.337579 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" podUID="42b32087-b938-4963-9aa0-ab40f5c370b3" Dec 16 15:28:33.339190 kubelet[3007]: E1216 15:28:33.339154 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gjpms" podUID="fa623572-3c69-4396-806f-a142b1ffa21a" Dec 16 15:28:34.125268 systemd[1]: Started sshd@12-10.230.25.166:22-139.178.89.65:41048.service - OpenSSH per-connection server daemon (139.178.89.65:41048). Dec 16 15:28:34.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.25.166:22-139.178.89.65:41048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:34.130547 kernel: kauditd_printk_skb: 124 callbacks suppressed Dec 16 15:28:34.130662 kernel: audit: type=1130 audit(1765898914.125:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.25.166:22-139.178.89.65:41048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:34.965000 audit[5551]: USER_ACCT pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:34.966815 sshd[5551]: Accepted publickey for core from 139.178.89.65 port 41048 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:28:34.971124 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:28:34.967000 audit[5551]: CRED_ACQ pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:34.973148 kernel: audit: type=1101 audit(1765898914.965:783): pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:34.973229 kernel: audit: type=1103 audit(1765898914.967:784): pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:34.980779 kernel: audit: type=1006 audit(1765898914.967:785): pid=5551 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 15:28:34.967000 audit[5551]: SYSCALL arch=c000003e 
syscall=1 success=yes exit=3 a0=8 a1=7ffd5b13d170 a2=3 a3=0 items=0 ppid=1 pid=5551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:34.984933 systemd-logind[1641]: New session 12 of user core. Dec 16 15:28:34.967000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:28:34.989045 kernel: audit: type=1300 audit(1765898914.967:785): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5b13d170 a2=3 a3=0 items=0 ppid=1 pid=5551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:34.989115 kernel: audit: type=1327 audit(1765898914.967:785): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:28:34.997787 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 15:28:35.003000 audit[5551]: USER_START pid=5551 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:35.009697 kernel: audit: type=1105 audit(1765898915.003:786): pid=5551 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:35.010000 audit[5567]: CRED_ACQ pid=5567 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:35.015647 kernel: audit: type=1103 audit(1765898915.010:787): pid=5567 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:36.010994 sshd[5567]: Connection closed by 139.178.89.65 port 41048 Dec 16 15:28:36.011402 sshd-session[5551]: pam_unix(sshd:session): session closed for user core Dec 16 15:28:36.027000 audit[5551]: USER_END pid=5551 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:36.035562 kernel: audit: type=1106 audit(1765898916.027:788): pid=5551 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:36.036171 systemd[1]: sshd@12-10.230.25.166:22-139.178.89.65:41048.service: Deactivated successfully. 
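For reference, the proctitle= values in the audit PROCTITLE records above are the process command line, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch that decodes them, using two values taken from the records in this log:

# Decode an audit PROCTITLE value: hex-encoded bytes, arguments separated by NULs.
def decode_proctitle(hex_value):
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# The sshd session record above:
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# ['sshd-session: core [priv]']

# The iptables-restore record above:
print(decode_proctitle("69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"))
# ['iptables-restore', '-w', '5', '--noflush', '--counters']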
Dec 16 15:28:36.027000 audit[5551]: CRED_DISP pid=5551 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:36.041214 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 15:28:36.043752 kernel: audit: type=1104 audit(1765898916.027:789): pid=5551 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:36.043906 systemd-logind[1641]: Session 12 logged out. Waiting for processes to exit. Dec 16 15:28:36.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.25.166:22-139.178.89.65:41048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:36.047869 systemd-logind[1641]: Removed session 12. Dec 16 15:28:36.339440 kubelet[3007]: E1216 15:28:36.339375 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c66d96545-p5cf6" podUID="ee0bd064-b3c2-43ac-bd37-37a79e955339" Dec 16 15:28:38.340956 kubelet[3007]: E1216 15:28:38.338804 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" podUID="71c27099-4499-4f5b-8630-5d35b5c1100b" Dec 16 15:28:38.343756 containerd[1667]: time="2025-12-16T15:28:38.340128414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 15:28:38.669694 containerd[1667]: time="2025-12-16T15:28:38.669439697Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:38.670952 containerd[1667]: time="2025-12-16T15:28:38.670885006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 15:28:38.671035 containerd[1667]: time="2025-12-16T15:28:38.671005428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:38.671325 kubelet[3007]: 
E1216 15:28:38.671257 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:38.671414 kubelet[3007]: E1216 15:28:38.671332 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:38.671490 kubelet[3007]: E1216 15:28:38.671456 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c8d6c5b45-zv6vg_calico-apiserver(5a7920f3-ed03-480b-921f-a7a3eaa95ad5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:38.671596 kubelet[3007]: E1216 15:28:38.671539 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" podUID="5a7920f3-ed03-480b-921f-a7a3eaa95ad5" Dec 16 15:28:39.338901 kubelet[3007]: E1216 15:28:39.338181 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" podUID="4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2" Dec 16 15:28:41.175698 systemd[1]: Started sshd@13-10.230.25.166:22-139.178.89.65:55572.service - OpenSSH per-connection server daemon (139.178.89.65:55572). Dec 16 15:28:41.186597 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 15:28:41.186767 kernel: audit: type=1130 audit(1765898921.175:791): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.25.166:22-139.178.89.65:55572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:41.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.25.166:22-139.178.89.65:55572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:28:41.979000 audit[5582]: USER_ACCT pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:41.990549 kernel: audit: type=1101 audit(1765898921.979:792): pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:41.990721 sshd[5582]: Accepted publickey for core from 139.178.89.65 port 55572 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:28:41.992000 audit[5582]: CRED_ACQ pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:41.997224 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:28:41.998856 kernel: audit: type=1103 audit(1765898921.992:793): pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:42.003564 kernel: audit: type=1006 audit(1765898921.992:794): pid=5582 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 15:28:41.992000 audit[5582]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd106059b0 a2=3 a3=0 items=0 ppid=1 pid=5582 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:42.011622 kernel: audit: type=1300 audit(1765898921.992:794): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd106059b0 a2=3 a3=0 items=0 ppid=1 pid=5582 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:41.992000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:28:42.020615 kernel: audit: type=1327 audit(1765898921.992:794): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:28:42.018943 systemd-logind[1641]: New session 13 of user core. Dec 16 15:28:42.026811 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 15:28:42.035000 audit[5582]: USER_START pid=5582 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:42.042577 kernel: audit: type=1105 audit(1765898922.035:795): pid=5582 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:42.044000 audit[5586]: CRED_ACQ pid=5586 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:42.050584 kernel: audit: type=1103 audit(1765898922.044:796): pid=5586 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:42.586612 sshd[5586]: Connection closed by 139.178.89.65 port 55572 Dec 16 15:28:42.587476 sshd-session[5582]: pam_unix(sshd:session): session closed for user core Dec 16 15:28:42.588000 audit[5582]: USER_END pid=5582 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:42.603110 systemd[1]: sshd@13-10.230.25.166:22-139.178.89.65:55572.service: Deactivated successfully. Dec 16 15:28:42.606757 kernel: audit: type=1106 audit(1765898922.588:797): pid=5582 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:42.606855 kernel: audit: type=1104 audit(1765898922.588:798): pid=5582 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:42.588000 audit[5582]: CRED_DISP pid=5582 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:42.607997 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 15:28:42.611104 systemd-logind[1641]: Session 13 logged out. Waiting for processes to exit. Dec 16 15:28:42.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.25.166:22-139.178.89.65:55572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:42.614706 systemd-logind[1641]: Removed session 13. 
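By this point the same cycle has repeated for each Calico image: containerd gets a 404 from ghcr.io, kubelet records ErrImagePull, and later pod syncs fall back to ImagePullBackOff. A minimal triage sketch, assuming journal text in the format above is piped in on stdin, that tallies which image references keep failing to pull:

# Count failed containerd PullImage attempts per image reference.
# The regex matches the literal 'level=error msg="PullImage \"<ref>\" failed"'
# form of the containerd entries shown in this log.
import re
import sys
from collections import Counter

pattern = re.compile(r'level=error msg="PullImage \\"([^"\\]+)\\" failed"')
failures = Counter(m.group(1) for line in sys.stdin for m in pattern.finditer(line))

for image, count in failures.most_common():
    print(f"{count:3d}  {image}")

Fed the entries above (for example via journalctl --no-pager piped into the script), this would show ghcr.io/flatcar/calico/apiserver:v3.30.4 failing most often, with single failures so far for the csi, node-driver-registrar, goldmane, whisker, whisker-backend and kube-controllers images.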
Dec 16 15:28:44.339356 containerd[1667]: time="2025-12-16T15:28:44.338203272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 15:28:44.676862 containerd[1667]: time="2025-12-16T15:28:44.676407073Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:44.679068 containerd[1667]: time="2025-12-16T15:28:44.678704629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:44.679068 containerd[1667]: time="2025-12-16T15:28:44.678746600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 15:28:44.679276 kubelet[3007]: E1216 15:28:44.679168 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:44.679276 kubelet[3007]: E1216 15:28:44.679257 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:44.679966 kubelet[3007]: E1216 15:28:44.679421 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b4bf85fc-bgpbx_calico-apiserver(42b32087-b938-4963-9aa0-ab40f5c370b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:44.679966 kubelet[3007]: E1216 15:28:44.679491 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" podUID="42b32087-b938-4963-9aa0-ab40f5c370b3" Dec 16 15:28:46.338623 containerd[1667]: time="2025-12-16T15:28:46.337251421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 15:28:46.655175 containerd[1667]: time="2025-12-16T15:28:46.654938186Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:46.656454 containerd[1667]: time="2025-12-16T15:28:46.656401419Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 15:28:46.657402 kubelet[3007]: E1216 15:28:46.657045 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 
15:28:46.657402 kubelet[3007]: E1216 15:28:46.657142 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 15:28:46.657402 kubelet[3007]: E1216 15:28:46.657263 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-gjpms_calico-system(fa623572-3c69-4396-806f-a142b1ffa21a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:46.664283 kubelet[3007]: E1216 15:28:46.664155 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gjpms" podUID="fa623572-3c69-4396-806f-a142b1ffa21a" Dec 16 15:28:46.665054 containerd[1667]: time="2025-12-16T15:28:46.656507865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:47.339732 containerd[1667]: time="2025-12-16T15:28:47.339652924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 15:28:47.660044 containerd[1667]: time="2025-12-16T15:28:47.659820880Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:47.661974 containerd[1667]: time="2025-12-16T15:28:47.661862085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 15:28:47.662082 containerd[1667]: time="2025-12-16T15:28:47.661899716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:47.662616 kubelet[3007]: E1216 15:28:47.662440 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 15:28:47.662616 kubelet[3007]: E1216 15:28:47.662543 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 15:28:47.663699 kubelet[3007]: E1216 15:28:47.663365 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7wvd4_calico-system(27e89a24-5a1a-4b44-908b-951574a9d075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:47.666978 containerd[1667]: time="2025-12-16T15:28:47.666937818Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 15:28:47.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.25.166:22-139.178.89.65:55586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:47.749974 systemd[1]: Started sshd@14-10.230.25.166:22-139.178.89.65:55586.service - OpenSSH per-connection server daemon (139.178.89.65:55586). Dec 16 15:28:47.755770 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 15:28:47.755869 kernel: audit: type=1130 audit(1765898927.748:800): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.25.166:22-139.178.89.65:55586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:47.974698 containerd[1667]: time="2025-12-16T15:28:47.974490716Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:47.979226 containerd[1667]: time="2025-12-16T15:28:47.979171741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 15:28:47.979493 containerd[1667]: time="2025-12-16T15:28:47.979223590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:47.980330 kubelet[3007]: E1216 15:28:47.980200 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 15:28:47.980658 kubelet[3007]: E1216 15:28:47.980624 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 15:28:47.981148 kubelet[3007]: E1216 15:28:47.981108 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7wvd4_calico-system(27e89a24-5a1a-4b44-908b-951574a9d075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:47.981525 kubelet[3007]: E1216 15:28:47.981429 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:28:48.579000 audit[5605]: USER_ACCT pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:48.588764 sshd[5605]: Accepted publickey for core from 139.178.89.65 port 55586 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:28:48.587658 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:28:48.585000 audit[5605]: CRED_ACQ pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:48.592097 kernel: audit: type=1101 audit(1765898928.579:801): pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:48.592197 kernel: audit: type=1103 audit(1765898928.585:802): pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:48.598624 kernel: audit: type=1006 audit(1765898928.585:803): pid=5605 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 15:28:48.585000 audit[5605]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe636bbad0 a2=3 a3=0 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:48.585000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:28:48.606984 kernel: audit: type=1300 audit(1765898928.585:803): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe636bbad0 a2=3 a3=0 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:48.607058 kernel: audit: type=1327 audit(1765898928.585:803): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:28:48.612695 systemd-logind[1641]: New session 14 of user core. Dec 16 15:28:48.619799 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 15:28:48.624000 audit[5605]: USER_START pid=5605 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:48.630000 audit[5608]: CRED_ACQ pid=5608 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:48.633903 kernel: audit: type=1105 audit(1765898928.624:804): pid=5605 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:48.633998 kernel: audit: type=1103 audit(1765898928.630:805): pid=5608 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:49.134373 sshd[5608]: Connection closed by 139.178.89.65 port 55586 Dec 16 15:28:49.135087 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Dec 16 15:28:49.137000 audit[5605]: USER_END pid=5605 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:49.147123 systemd[1]: sshd@14-10.230.25.166:22-139.178.89.65:55586.service: Deactivated successfully. Dec 16 15:28:49.147837 kernel: audit: type=1106 audit(1765898929.137:806): pid=5605 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:49.137000 audit[5605]: CRED_DISP pid=5605 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:49.150850 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 15:28:49.154725 kernel: audit: type=1104 audit(1765898929.137:807): pid=5605 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:49.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.25.166:22-139.178.89.65:55586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:49.155421 systemd-logind[1641]: Session 14 logged out. Waiting for processes to exit. Dec 16 15:28:49.157281 systemd-logind[1641]: Removed session 14. 
Dec 16 15:28:49.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.25.166:22-139.178.89.65:55588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:49.299000 systemd[1]: Started sshd@15-10.230.25.166:22-139.178.89.65:55588.service - OpenSSH per-connection server daemon (139.178.89.65:55588). Dec 16 15:28:50.093000 audit[5621]: USER_ACCT pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:50.095046 sshd[5621]: Accepted publickey for core from 139.178.89.65 port 55588 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:28:50.094000 audit[5621]: CRED_ACQ pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:50.094000 audit[5621]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc0a1da40 a2=3 a3=0 items=0 ppid=1 pid=5621 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:50.094000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:28:50.097072 sshd-session[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:28:50.106001 systemd-logind[1641]: New session 15 of user core. Dec 16 15:28:50.113773 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 15:28:50.117000 audit[5621]: USER_START pid=5621 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:50.120000 audit[5624]: CRED_ACQ pid=5624 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:50.338745 containerd[1667]: time="2025-12-16T15:28:50.338302449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 15:28:50.664416 containerd[1667]: time="2025-12-16T15:28:50.664275536Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:50.665670 containerd[1667]: time="2025-12-16T15:28:50.665625517Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 15:28:50.665768 containerd[1667]: time="2025-12-16T15:28:50.665729366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:50.673801 kubelet[3007]: E1216 15:28:50.673411 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 15:28:50.673801 kubelet[3007]: E1216 15:28:50.673502 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 15:28:50.673801 kubelet[3007]: E1216 15:28:50.673682 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75675d747f-2cjwb_calico-system(71c27099-4499-4f5b-8630-5d35b5c1100b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:50.673801 kubelet[3007]: E1216 15:28:50.673742 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" podUID="71c27099-4499-4f5b-8630-5d35b5c1100b" Dec 16 15:28:50.751663 sshd[5624]: Connection closed by 139.178.89.65 port 55588 Dec 16 15:28:50.758720 sshd-session[5621]: pam_unix(sshd:session): session closed for user core Dec 16 15:28:50.762000 audit[5621]: USER_END pid=5621 uid=0 auid=500 
ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:50.763000 audit[5621]: CRED_DISP pid=5621 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:50.773038 systemd[1]: sshd@15-10.230.25.166:22-139.178.89.65:55588.service: Deactivated successfully. Dec 16 15:28:50.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.25.166:22-139.178.89.65:55588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:50.778103 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 15:28:50.782976 systemd-logind[1641]: Session 15 logged out. Waiting for processes to exit. Dec 16 15:28:50.784825 systemd-logind[1641]: Removed session 15. Dec 16 15:28:50.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.25.166:22-139.178.89.65:35258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:50.918984 systemd[1]: Started sshd@16-10.230.25.166:22-139.178.89.65:35258.service - OpenSSH per-connection server daemon (139.178.89.65:35258). Dec 16 15:28:51.342612 kubelet[3007]: E1216 15:28:51.342480 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" podUID="5a7920f3-ed03-480b-921f-a7a3eaa95ad5" Dec 16 15:28:51.351080 containerd[1667]: time="2025-12-16T15:28:51.349746099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 15:28:51.660142 containerd[1667]: time="2025-12-16T15:28:51.659786332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:51.663866 containerd[1667]: time="2025-12-16T15:28:51.663727272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 15:28:51.664393 containerd[1667]: time="2025-12-16T15:28:51.663813870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:51.664685 kubelet[3007]: E1216 15:28:51.664615 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 15:28:51.664806 kubelet[3007]: E1216 15:28:51.664702 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 15:28:51.664865 kubelet[3007]: E1216 15:28:51.664834 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c66d96545-p5cf6_calico-system(ee0bd064-b3c2-43ac-bd37-37a79e955339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:51.667537 containerd[1667]: time="2025-12-16T15:28:51.667478006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 15:28:51.733000 audit[5633]: USER_ACCT pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:51.736489 sshd[5633]: Accepted publickey for core from 139.178.89.65 port 35258 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:28:51.735000 audit[5633]: CRED_ACQ pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:51.735000 audit[5633]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff326738f0 a2=3 a3=0 items=0 ppid=1 pid=5633 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:51.735000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:28:51.738011 sshd-session[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:28:51.748423 systemd-logind[1641]: New session 16 of user core. Dec 16 15:28:51.756491 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 15:28:51.764000 audit[5633]: USER_START pid=5633 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:51.767000 audit[5638]: CRED_ACQ pid=5638 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:52.207235 containerd[1667]: time="2025-12-16T15:28:52.207083204Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:52.209271 containerd[1667]: time="2025-12-16T15:28:52.209119066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 15:28:52.209369 containerd[1667]: time="2025-12-16T15:28:52.209189249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:52.209806 kubelet[3007]: E1216 15:28:52.209755 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 15:28:52.210252 kubelet[3007]: E1216 15:28:52.209820 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 15:28:52.210252 kubelet[3007]: E1216 15:28:52.209934 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c66d96545-p5cf6_calico-system(ee0bd064-b3c2-43ac-bd37-37a79e955339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:52.210252 kubelet[3007]: E1216 15:28:52.209995 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c66d96545-p5cf6" podUID="ee0bd064-b3c2-43ac-bd37-37a79e955339" Dec 16 15:28:52.381541 sshd[5638]: Connection closed by 139.178.89.65 port 35258 Dec 16 15:28:52.382654 sshd-session[5633]: pam_unix(sshd:session): session closed for user core Dec 16 15:28:52.384000 
audit[5633]: USER_END pid=5633 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:52.384000 audit[5633]: CRED_DISP pid=5633 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:52.391266 systemd[1]: sshd@16-10.230.25.166:22-139.178.89.65:35258.service: Deactivated successfully. Dec 16 15:28:52.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.25.166:22-139.178.89.65:35258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:52.397151 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 15:28:52.400014 systemd-logind[1641]: Session 16 logged out. Waiting for processes to exit. Dec 16 15:28:52.402613 systemd-logind[1641]: Removed session 16. Dec 16 15:28:54.336870 containerd[1667]: time="2025-12-16T15:28:54.336728926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 15:28:54.647847 containerd[1667]: time="2025-12-16T15:28:54.647288739Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:28:54.649022 containerd[1667]: time="2025-12-16T15:28:54.648975130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 15:28:54.649122 containerd[1667]: time="2025-12-16T15:28:54.649096716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 15:28:54.649438 kubelet[3007]: E1216 15:28:54.649357 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:54.649909 kubelet[3007]: E1216 15:28:54.649457 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:28:54.649909 kubelet[3007]: E1216 15:28:54.649609 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b4bf85fc-lt27n_calico-apiserver(4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 15:28:54.649909 kubelet[3007]: E1216 15:28:54.649661 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" podUID="4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2" Dec 16 15:28:55.339975 kubelet[3007]: E1216 15:28:55.339921 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" podUID="42b32087-b938-4963-9aa0-ab40f5c370b3" Dec 16 15:28:57.552559 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 15:28:57.552762 kernel: audit: type=1130 audit(1765898937.540:827): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.25.166:22-139.178.89.65:35262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:57.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.25.166:22-139.178.89.65:35262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:57.542094 systemd[1]: Started sshd@17-10.230.25.166:22-139.178.89.65:35262.service - OpenSSH per-connection server daemon (139.178.89.65:35262). Dec 16 15:28:58.342328 kubelet[3007]: E1216 15:28:58.340997 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:28:58.347000 audit[5650]: USER_ACCT pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:58.358650 kernel: audit: type=1101 audit(1765898938.347:828): pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:58.355118 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:28:58.359296 sshd[5650]: Accepted publickey for core from 139.178.89.65 port 35262 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:28:58.347000 audit[5650]: CRED_ACQ 
pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:58.371041 kernel: audit: type=1103 audit(1765898938.347:829): pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:58.371127 kernel: audit: type=1006 audit(1765898938.347:830): pid=5650 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 15:28:58.374579 kernel: audit: type=1300 audit(1765898938.347:830): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff7ec0590 a2=3 a3=0 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:58.347000 audit[5650]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff7ec0590 a2=3 a3=0 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:28:58.376945 systemd-logind[1641]: New session 17 of user core. Dec 16 15:28:58.385105 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 15:28:58.387944 kernel: audit: type=1327 audit(1765898938.347:830): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:28:58.347000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:28:58.393000 audit[5650]: USER_START pid=5650 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:58.401573 kernel: audit: type=1105 audit(1765898938.393:831): pid=5650 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:58.405000 audit[5653]: CRED_ACQ pid=5653 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:58.412561 kernel: audit: type=1103 audit(1765898938.405:832): pid=5653 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:58.910610 sshd[5653]: Connection closed by 139.178.89.65 port 35262 Dec 16 15:28:58.912820 sshd-session[5650]: pam_unix(sshd:session): session closed for user core Dec 16 15:28:58.914000 audit[5650]: USER_END pid=5650 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:58.921844 systemd-logind[1641]: Session 17 logged out. Waiting for processes to exit. Dec 16 15:28:58.925039 kernel: audit: type=1106 audit(1765898938.914:833): pid=5650 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:58.925045 systemd[1]: sshd@17-10.230.25.166:22-139.178.89.65:35262.service: Deactivated successfully. Dec 16 15:28:58.915000 audit[5650]: CRED_DISP pid=5650 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:58.928780 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 15:28:58.930813 kernel: audit: type=1104 audit(1765898938.915:834): pid=5650 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:28:58.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.25.166:22-139.178.89.65:35262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:28:58.932995 systemd-logind[1641]: Removed session 17. 
Dec 16 15:28:59.336444 kubelet[3007]: E1216 15:28:59.336378 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gjpms" podUID="fa623572-3c69-4396-806f-a142b1ffa21a" Dec 16 15:29:01.331564 containerd[1667]: time="2025-12-16T15:29:01.330756526Z" level=info msg="StopPodSandbox for \"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\"" Dec 16 15:29:01.353228 kubelet[3007]: E1216 15:29:01.352178 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" podUID="71c27099-4499-4f5b-8630-5d35b5c1100b" Dec 16 15:29:01.620466 containerd[1667]: 2025-12-16 15:29:01.519 [WARNING][5704] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:29:01.620466 containerd[1667]: 2025-12-16 15:29:01.520 [INFO][5704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Dec 16 15:29:01.620466 containerd[1667]: 2025-12-16 15:29:01.520 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" iface="eth0" netns="" Dec 16 15:29:01.620466 containerd[1667]: 2025-12-16 15:29:01.520 [INFO][5704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Dec 16 15:29:01.620466 containerd[1667]: 2025-12-16 15:29:01.520 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Dec 16 15:29:01.620466 containerd[1667]: 2025-12-16 15:29:01.594 [INFO][5711] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" HandleID="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:29:01.620466 containerd[1667]: 2025-12-16 15:29:01.596 [INFO][5711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 15:29:01.620466 containerd[1667]: 2025-12-16 15:29:01.596 [INFO][5711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 15:29:01.620466 containerd[1667]: 2025-12-16 15:29:01.611 [WARNING][5711] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" HandleID="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:29:01.620466 containerd[1667]: 2025-12-16 15:29:01.611 [INFO][5711] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" HandleID="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:29:01.620466 containerd[1667]: 2025-12-16 15:29:01.613 [INFO][5711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 15:29:01.620466 containerd[1667]: 2025-12-16 15:29:01.616 [INFO][5704] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Dec 16 15:29:01.620466 containerd[1667]: time="2025-12-16T15:29:01.620451165Z" level=info msg="TearDown network for sandbox \"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\" successfully" Dec 16 15:29:01.622900 containerd[1667]: time="2025-12-16T15:29:01.620737020Z" level=info msg="StopPodSandbox for \"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\" returns successfully" Dec 16 15:29:01.634143 containerd[1667]: time="2025-12-16T15:29:01.633916501Z" level=info msg="RemovePodSandbox for \"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\"" Dec 16 15:29:01.634143 containerd[1667]: time="2025-12-16T15:29:01.634012016Z" level=info msg="Forcibly stopping sandbox \"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\"" Dec 16 15:29:01.768408 containerd[1667]: 2025-12-16 15:29:01.708 [WARNING][5726] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" WorkloadEndpoint="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:29:01.768408 containerd[1667]: 2025-12-16 15:29:01.708 [INFO][5726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Dec 16 15:29:01.768408 containerd[1667]: 2025-12-16 15:29:01.708 [INFO][5726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" iface="eth0" netns="" Dec 16 15:29:01.768408 containerd[1667]: 2025-12-16 15:29:01.709 [INFO][5726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Dec 16 15:29:01.768408 containerd[1667]: 2025-12-16 15:29:01.709 [INFO][5726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Dec 16 15:29:01.768408 containerd[1667]: 2025-12-16 15:29:01.749 [INFO][5733] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" HandleID="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:29:01.768408 containerd[1667]: 2025-12-16 15:29:01.750 [INFO][5733] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 15:29:01.768408 containerd[1667]: 2025-12-16 15:29:01.750 [INFO][5733] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 15:29:01.768408 containerd[1667]: 2025-12-16 15:29:01.761 [WARNING][5733] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" HandleID="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:29:01.768408 containerd[1667]: 2025-12-16 15:29:01.761 [INFO][5733] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" HandleID="k8s-pod-network.8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Workload="srv--g2i2t.gb1.brightbox.com-k8s-whisker--54b6d84d89--rmmv2-eth0" Dec 16 15:29:01.768408 containerd[1667]: 2025-12-16 15:29:01.763 [INFO][5733] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 15:29:01.768408 containerd[1667]: 2025-12-16 15:29:01.765 [INFO][5726] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8" Dec 16 15:29:01.768408 containerd[1667]: time="2025-12-16T15:29:01.768083626Z" level=info msg="TearDown network for sandbox \"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\" successfully" Dec 16 15:29:01.780820 containerd[1667]: time="2025-12-16T15:29:01.780271856Z" level=info msg="Ensure that sandbox 8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8 in task-service has been cleanup successfully" Dec 16 15:29:01.792719 containerd[1667]: time="2025-12-16T15:29:01.792660778Z" level=info msg="RemovePodSandbox \"8fde2cbaa78551577215eac52dba41cfcd6c9ac03b58fdedc47a4f3cde8b58f8\" returns successfully" Dec 16 15:29:03.343131 kubelet[3007]: E1216 15:29:03.342952 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c66d96545-p5cf6" podUID="ee0bd064-b3c2-43ac-bd37-37a79e955339" Dec 16 15:29:04.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.25.166:22-139.178.89.65:53234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:04.079708 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 15:29:04.079786 kernel: audit: type=1130 audit(1765898944.072:836): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.25.166:22-139.178.89.65:53234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:29:04.072607 systemd[1]: Started sshd@18-10.230.25.166:22-139.178.89.65:53234.service - OpenSSH per-connection server daemon (139.178.89.65:53234). Dec 16 15:29:04.337108 kubelet[3007]: E1216 15:29:04.336896 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" podUID="5a7920f3-ed03-480b-921f-a7a3eaa95ad5" Dec 16 15:29:04.865000 audit[5740]: USER_ACCT pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:04.867672 sshd[5740]: Accepted publickey for core from 139.178.89.65 port 53234 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:29:04.870370 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:29:04.871842 kernel: audit: type=1101 audit(1765898944.865:837): pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:04.871946 kernel: audit: type=1103 audit(1765898944.867:838): pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:04.867000 audit[5740]: CRED_ACQ pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:04.878335 kernel: audit: type=1006 audit(1765898944.867:839): pid=5740 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 15:29:04.882177 kernel: audit: type=1300 audit(1765898944.867:839): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1db2f940 a2=3 a3=0 items=0 ppid=1 pid=5740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:04.887649 kernel: audit: type=1327 audit(1765898944.867:839): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:04.867000 audit[5740]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1db2f940 a2=3 a3=0 items=0 ppid=1 pid=5740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:04.867000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:04.891182 systemd-logind[1641]: New session 18 of user core. 
Dec 16 15:29:04.899792 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 15:29:04.904000 audit[5740]: USER_START pid=5740 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:04.911569 kernel: audit: type=1105 audit(1765898944.904:840): pid=5740 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:04.912321 kernel: audit: type=1103 audit(1765898944.911:841): pid=5743 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:04.911000 audit[5743]: CRED_ACQ pid=5743 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:05.441199 sshd[5743]: Connection closed by 139.178.89.65 port 53234 Dec 16 15:29:05.440902 sshd-session[5740]: pam_unix(sshd:session): session closed for user core Dec 16 15:29:05.443000 audit[5740]: USER_END pid=5740 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:05.448354 systemd[1]: sshd@18-10.230.25.166:22-139.178.89.65:53234.service: Deactivated successfully. Dec 16 15:29:05.450809 kernel: audit: type=1106 audit(1765898945.443:842): pid=5740 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:05.450898 kernel: audit: type=1104 audit(1765898945.443:843): pid=5740 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:05.443000 audit[5740]: CRED_DISP pid=5740 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:05.453591 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 15:29:05.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.25.166:22-139.178.89.65:53234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:05.461252 systemd-logind[1641]: Session 18 logged out. Waiting for processes to exit. 
Dec 16 15:29:05.464059 systemd-logind[1641]: Removed session 18. Dec 16 15:29:07.338629 kubelet[3007]: E1216 15:29:07.338051 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" podUID="4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2" Dec 16 15:29:07.340290 kubelet[3007]: E1216 15:29:07.338548 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" podUID="42b32087-b938-4963-9aa0-ab40f5c370b3" Dec 16 15:29:10.337458 kubelet[3007]: E1216 15:29:10.337364 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gjpms" podUID="fa623572-3c69-4396-806f-a142b1ffa21a" Dec 16 15:29:10.616308 systemd[1]: Started sshd@19-10.230.25.166:22-139.178.89.65:55054.service - OpenSSH per-connection server daemon (139.178.89.65:55054). Dec 16 15:29:10.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.25.166:22-139.178.89.65:55054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:10.621676 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 15:29:10.621878 kernel: audit: type=1130 audit(1765898950.616:845): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.25.166:22-139.178.89.65:55054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:29:11.428000 audit[5759]: USER_ACCT pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:11.429974 sshd[5759]: Accepted publickey for core from 139.178.89.65 port 55054 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:29:11.432039 sshd-session[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:29:11.430000 audit[5759]: CRED_ACQ pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:11.436248 kernel: audit: type=1101 audit(1765898951.428:846): pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:11.436343 kernel: audit: type=1103 audit(1765898951.430:847): pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:11.440477 kernel: audit: type=1006 audit(1765898951.430:848): pid=5759 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 15:29:11.430000 audit[5759]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe924649c0 a2=3 a3=0 items=0 ppid=1 pid=5759 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:11.444204 kernel: audit: type=1300 audit(1765898951.430:848): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe924649c0 a2=3 a3=0 items=0 ppid=1 pid=5759 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:11.430000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:11.448768 kernel: audit: type=1327 audit(1765898951.430:848): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:11.447957 systemd-logind[1641]: New session 19 of user core. Dec 16 15:29:11.456023 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 15:29:11.461000 audit[5759]: USER_START pid=5759 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:11.472437 kernel: audit: type=1105 audit(1765898951.461:849): pid=5759 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:11.472594 kernel: audit: type=1103 audit(1765898951.465:850): pid=5762 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:11.465000 audit[5762]: CRED_ACQ pid=5762 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:11.976029 sshd[5762]: Connection closed by 139.178.89.65 port 55054 Dec 16 15:29:11.978192 sshd-session[5759]: pam_unix(sshd:session): session closed for user core Dec 16 15:29:11.981000 audit[5759]: USER_END pid=5759 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:11.991647 kernel: audit: type=1106 audit(1765898951.981:851): pid=5759 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:11.986000 audit[5759]: CRED_DISP pid=5759 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:11.996036 systemd[1]: sshd@19-10.230.25.166:22-139.178.89.65:55054.service: Deactivated successfully. Dec 16 15:29:11.999926 kernel: audit: type=1104 audit(1765898951.986:852): pid=5759 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:11.999405 systemd-logind[1641]: Session 19 logged out. Waiting for processes to exit. Dec 16 15:29:12.001232 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 15:29:11.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.25.166:22-139.178.89.65:55054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:12.006971 systemd-logind[1641]: Removed session 19. 
Dec 16 15:29:12.137944 systemd[1]: Started sshd@20-10.230.25.166:22-139.178.89.65:55068.service - OpenSSH per-connection server daemon (139.178.89.65:55068). Dec 16 15:29:12.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.25.166:22-139.178.89.65:55068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:12.337708 kubelet[3007]: E1216 15:29:12.337612 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:29:12.983000 audit[5774]: USER_ACCT pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:12.984571 sshd[5774]: Accepted publickey for core from 139.178.89.65 port 55068 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:29:12.985000 audit[5774]: CRED_ACQ pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:12.985000 audit[5774]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbd764c30 a2=3 a3=0 items=0 ppid=1 pid=5774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:12.985000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:12.987694 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:29:12.996764 systemd-logind[1641]: New session 20 of user core. Dec 16 15:29:13.006862 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 15:29:13.012000 audit[5774]: USER_START pid=5774 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:13.015000 audit[5777]: CRED_ACQ pid=5777 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:13.342504 kubelet[3007]: E1216 15:29:13.342385 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" podUID="71c27099-4499-4f5b-8630-5d35b5c1100b" Dec 16 15:29:13.822561 sshd[5777]: Connection closed by 139.178.89.65 port 55068 Dec 16 15:29:13.834059 sshd-session[5774]: pam_unix(sshd:session): session closed for user core Dec 16 15:29:13.843000 audit[5774]: USER_END pid=5774 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:13.844000 audit[5774]: CRED_DISP pid=5774 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:13.848121 systemd[1]: sshd@20-10.230.25.166:22-139.178.89.65:55068.service: Deactivated successfully. Dec 16 15:29:13.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.25.166:22-139.178.89.65:55068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:13.851163 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 15:29:13.853228 systemd-logind[1641]: Session 20 logged out. Waiting for processes to exit. Dec 16 15:29:13.856252 systemd-logind[1641]: Removed session 20. Dec 16 15:29:13.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.25.166:22-139.178.89.65:55072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:13.979535 systemd[1]: Started sshd@21-10.230.25.166:22-139.178.89.65:55072.service - OpenSSH per-connection server daemon (139.178.89.65:55072). 
Dec 16 15:29:14.779000 audit[5787]: USER_ACCT pid=5787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:14.780634 sshd[5787]: Accepted publickey for core from 139.178.89.65 port 55072 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:29:14.780000 audit[5787]: CRED_ACQ pid=5787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:14.780000 audit[5787]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe63e01020 a2=3 a3=0 items=0 ppid=1 pid=5787 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:14.780000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:14.782090 sshd-session[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:29:14.791082 systemd-logind[1641]: New session 21 of user core. Dec 16 15:29:14.800840 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 15:29:14.805000 audit[5787]: USER_START pid=5787 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:14.808000 audit[5790]: CRED_ACQ pid=5790 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:15.969000 audit[5800]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=5800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:29:15.976692 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 16 15:29:15.977621 kernel: audit: type=1325 audit(1765898955.969:869): table=filter:138 family=2 entries=26 op=nft_register_rule pid=5800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:29:15.969000 audit[5800]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc981f0130 a2=0 a3=7ffc981f011c items=0 ppid=3127 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:15.990567 kernel: audit: type=1300 audit(1765898955.969:869): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc981f0130 a2=0 a3=7ffc981f011c items=0 ppid=3127 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:15.969000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:29:15.991000 audit[5800]: NETFILTER_CFG table=nat:139 family=2 entries=20 
op=nft_register_rule pid=5800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:29:15.996094 kernel: audit: type=1327 audit(1765898955.969:869): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:29:15.996199 kernel: audit: type=1325 audit(1765898955.991:870): table=nat:139 family=2 entries=20 op=nft_register_rule pid=5800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:29:15.991000 audit[5800]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc981f0130 a2=0 a3=0 items=0 ppid=3127 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:16.007185 kernel: audit: type=1300 audit(1765898955.991:870): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc981f0130 a2=0 a3=0 items=0 ppid=3127 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:16.007349 kernel: audit: type=1327 audit(1765898955.991:870): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:29:15.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:29:16.077383 sshd[5790]: Connection closed by 139.178.89.65 port 55072 Dec 16 15:29:16.076443 sshd-session[5787]: pam_unix(sshd:session): session closed for user core Dec 16 15:29:16.078000 audit[5787]: USER_END pid=5787 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:16.085671 kernel: audit: type=1106 audit(1765898956.078:871): pid=5787 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:16.085000 audit[5787]: CRED_DISP pid=5787 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:16.092631 kernel: audit: type=1104 audit(1765898956.085:872): pid=5787 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:16.095486 systemd[1]: sshd@21-10.230.25.166:22-139.178.89.65:55072.service: Deactivated successfully. Dec 16 15:29:16.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.25.166:22-139.178.89.65:55072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:29:16.101540 kernel: audit: type=1131 audit(1765898956.095:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.25.166:22-139.178.89.65:55072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:16.104186 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 15:29:16.106365 systemd-logind[1641]: Session 21 logged out. Waiting for processes to exit. Dec 16 15:29:16.108764 systemd-logind[1641]: Removed session 21. Dec 16 15:29:16.233361 systemd[1]: Started sshd@22-10.230.25.166:22-139.178.89.65:55078.service - OpenSSH per-connection server daemon (139.178.89.65:55078). Dec 16 15:29:16.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.25.166:22-139.178.89.65:55078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:16.240559 kernel: audit: type=1130 audit(1765898956.233:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.25.166:22-139.178.89.65:55078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:16.339088 kubelet[3007]: E1216 15:29:16.338025 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" podUID="5a7920f3-ed03-480b-921f-a7a3eaa95ad5" Dec 16 15:29:17.039000 audit[5805]: USER_ACCT pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:17.041329 sshd[5805]: Accepted publickey for core from 139.178.89.65 port 55078 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:29:17.043000 audit[5805]: CRED_ACQ pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:17.043000 audit[5805]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff215474c0 a2=3 a3=0 items=0 ppid=1 pid=5805 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:17.043000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:17.040000 audit[5809]: NETFILTER_CFG table=filter:140 family=2 entries=38 op=nft_register_rule pid=5809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:29:17.040000 audit[5809]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe0962d8a0 a2=0 a3=7ffe0962d88c items=0 ppid=3127 pid=5809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:17.040000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:29:17.045863 sshd-session[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:29:17.055000 audit[5809]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:29:17.055000 audit[5809]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe0962d8a0 a2=0 a3=0 items=0 ppid=3127 pid=5809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:17.055000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:29:17.056660 systemd-logind[1641]: New session 22 of user core. Dec 16 15:29:17.065899 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 15:29:17.072000 audit[5805]: USER_START pid=5805 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:17.076000 audit[5810]: CRED_ACQ pid=5810 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:17.342020 kubelet[3007]: E1216 15:29:17.341352 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c66d96545-p5cf6" podUID="ee0bd064-b3c2-43ac-bd37-37a79e955339" Dec 16 15:29:17.858746 sshd[5810]: Connection closed by 139.178.89.65 port 55078 Dec 16 15:29:17.858235 sshd-session[5805]: pam_unix(sshd:session): session closed for user core Dec 16 15:29:17.859000 audit[5805]: USER_END pid=5805 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:17.860000 audit[5805]: CRED_DISP pid=5805 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:17.864227 systemd[1]: 
sshd@22-10.230.25.166:22-139.178.89.65:55078.service: Deactivated successfully. Dec 16 15:29:17.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.25.166:22-139.178.89.65:55078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:17.867302 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 15:29:17.869467 systemd-logind[1641]: Session 22 logged out. Waiting for processes to exit. Dec 16 15:29:17.871966 systemd-logind[1641]: Removed session 22. Dec 16 15:29:18.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.25.166:22-139.178.89.65:55086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:18.024096 systemd[1]: Started sshd@23-10.230.25.166:22-139.178.89.65:55086.service - OpenSSH per-connection server daemon (139.178.89.65:55086). Dec 16 15:29:18.830000 audit[5820]: USER_ACCT pid=5820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:18.832346 sshd[5820]: Accepted publickey for core from 139.178.89.65 port 55086 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:29:18.832000 audit[5820]: CRED_ACQ pid=5820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:18.832000 audit[5820]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1f2819d0 a2=3 a3=0 items=0 ppid=1 pid=5820 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:18.832000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:18.835375 sshd-session[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:29:18.843282 systemd-logind[1641]: New session 23 of user core. Dec 16 15:29:18.853786 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 15:29:18.857000 audit[5820]: USER_START pid=5820 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:18.859000 audit[5823]: CRED_ACQ pid=5823 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:19.377204 sshd[5823]: Connection closed by 139.178.89.65 port 55086 Dec 16 15:29:19.378154 sshd-session[5820]: pam_unix(sshd:session): session closed for user core Dec 16 15:29:19.380000 audit[5820]: USER_END pid=5820 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:19.380000 audit[5820]: CRED_DISP pid=5820 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:19.384659 systemd[1]: sshd@23-10.230.25.166:22-139.178.89.65:55086.service: Deactivated successfully. Dec 16 15:29:19.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.25.166:22-139.178.89.65:55086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:19.388410 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 15:29:19.391248 systemd-logind[1641]: Session 23 logged out. Waiting for processes to exit. Dec 16 15:29:19.393170 systemd-logind[1641]: Removed session 23. 
Dec 16 15:29:22.336780 kubelet[3007]: E1216 15:29:22.336700 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" podUID="4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2" Dec 16 15:29:22.338430 kubelet[3007]: E1216 15:29:22.337058 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" podUID="42b32087-b938-4963-9aa0-ab40f5c370b3" Dec 16 15:29:22.338430 kubelet[3007]: E1216 15:29:22.337164 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gjpms" podUID="fa623572-3c69-4396-806f-a142b1ffa21a" Dec 16 15:29:24.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.25.166:22-139.178.89.65:56538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:24.537890 systemd[1]: Started sshd@24-10.230.25.166:22-139.178.89.65:56538.service - OpenSSH per-connection server daemon (139.178.89.65:56538). Dec 16 15:29:24.541481 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 16 15:29:24.541554 kernel: audit: type=1130 audit(1765898964.536:894): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.25.166:22-139.178.89.65:56538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:29:25.327000 audit[5835]: USER_ACCT pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:25.329653 sshd[5835]: Accepted publickey for core from 139.178.89.65 port 56538 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:29:25.334923 kernel: audit: type=1101 audit(1765898965.327:895): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:25.334646 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:29:25.332000 audit[5835]: CRED_ACQ pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:25.342762 kubelet[3007]: E1216 15:29:25.342712 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075" Dec 16 15:29:25.344720 kernel: audit: type=1103 audit(1765898965.332:896): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:25.350541 kernel: audit: type=1006 audit(1765898965.332:897): pid=5835 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 15:29:25.332000 audit[5835]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff63489920 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:25.332000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:25.358941 kernel: audit: type=1300 audit(1765898965.332:897): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff63489920 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:25.358996 kernel: 
audit: type=1327 audit(1765898965.332:897): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:25.364501 systemd-logind[1641]: New session 24 of user core. Dec 16 15:29:25.368793 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 15:29:25.373000 audit[5835]: USER_START pid=5835 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:25.381573 kernel: audit: type=1105 audit(1765898965.373:898): pid=5835 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:25.382000 audit[5844]: CRED_ACQ pid=5844 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:25.388556 kernel: audit: type=1103 audit(1765898965.382:899): pid=5844 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:25.884113 sshd[5844]: Connection closed by 139.178.89.65 port 56538 Dec 16 15:29:25.885634 sshd-session[5835]: pam_unix(sshd:session): session closed for user core Dec 16 15:29:25.889000 audit[5835]: USER_END pid=5835 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:25.899637 kernel: audit: type=1106 audit(1765898965.889:900): pid=5835 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:25.901447 systemd[1]: sshd@24-10.230.25.166:22-139.178.89.65:56538.service: Deactivated successfully. Dec 16 15:29:25.889000 audit[5835]: CRED_DISP pid=5835 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:25.907737 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 15:29:25.908623 kernel: audit: type=1104 audit(1765898965.889:901): pid=5835 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:25.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.25.166:22-139.178.89.65:56538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 15:29:25.912311 systemd-logind[1641]: Session 24 logged out. Waiting for processes to exit. Dec 16 15:29:25.916000 audit[5856]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:29:25.916000 audit[5856]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd89a24150 a2=0 a3=7ffd89a2413c items=0 ppid=3127 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:25.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:29:25.917279 systemd-logind[1641]: Removed session 24. Dec 16 15:29:25.928000 audit[5856]: NETFILTER_CFG table=nat:143 family=2 entries=104 op=nft_register_chain pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 15:29:25.928000 audit[5856]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd89a24150 a2=0 a3=7ffd89a2413c items=0 ppid=3127 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:25.928000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 15:29:26.336544 kubelet[3007]: E1216 15:29:26.336465 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" podUID="71c27099-4499-4f5b-8630-5d35b5c1100b" Dec 16 15:29:29.337864 containerd[1667]: time="2025-12-16T15:29:29.337730597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 15:29:29.650243 containerd[1667]: time="2025-12-16T15:29:29.649882197Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:29:29.652262 containerd[1667]: time="2025-12-16T15:29:29.652140790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 15:29:29.652399 containerd[1667]: time="2025-12-16T15:29:29.652141413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 15:29:29.654020 kubelet[3007]: E1216 15:29:29.652866 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:29:29.654020 kubelet[3007]: E1216 15:29:29.653006 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:29:29.654020 kubelet[3007]: E1216 15:29:29.653185 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c8d6c5b45-zv6vg_calico-apiserver(5a7920f3-ed03-480b-921f-a7a3eaa95ad5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 15:29:29.654020 kubelet[3007]: E1216 15:29:29.653249 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" podUID="5a7920f3-ed03-480b-921f-a7a3eaa95ad5" Dec 16 15:29:31.058292 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 15:29:31.058569 kernel: audit: type=1130 audit(1765898971.046:905): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.25.166:22-139.178.89.65:57420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:31.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.25.166:22-139.178.89.65:57420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:31.048163 systemd[1]: Started sshd@25-10.230.25.166:22-139.178.89.65:57420.service - OpenSSH per-connection server daemon (139.178.89.65:57420). 
Dec 16 15:29:31.356668 kubelet[3007]: E1216 15:29:31.356591 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c66d96545-p5cf6" podUID="ee0bd064-b3c2-43ac-bd37-37a79e955339" Dec 16 15:29:31.920000 audit[5884]: USER_ACCT pid=5884 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:31.931733 kernel: audit: type=1101 audit(1765898971.920:906): pid=5884 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:31.931838 sshd[5884]: Accepted publickey for core from 139.178.89.65 port 57420 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:29:31.926000 audit[5884]: CRED_ACQ pid=5884 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:31.931176 sshd-session[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:29:31.941242 kernel: audit: type=1103 audit(1765898971.926:907): pid=5884 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:31.948561 kernel: audit: type=1006 audit(1765898971.926:908): pid=5884 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 15:29:31.926000 audit[5884]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9a6c4780 a2=3 a3=0 items=0 ppid=1 pid=5884 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:31.961333 kernel: audit: type=1300 audit(1765898971.926:908): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9a6c4780 a2=3 a3=0 items=0 ppid=1 pid=5884 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:31.960494 systemd-logind[1641]: New session 25 of user core. 
Dec 16 15:29:31.926000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:31.967577 kernel: audit: type=1327 audit(1765898971.926:908): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:31.967967 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 15:29:31.981753 kernel: audit: type=1105 audit(1765898971.973:909): pid=5884 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:31.973000 audit[5884]: USER_START pid=5884 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:31.981000 audit[5889]: CRED_ACQ pid=5889 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:31.989236 kernel: audit: type=1103 audit(1765898971.981:910): pid=5889 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:32.572542 sshd[5889]: Connection closed by 139.178.89.65 port 57420 Dec 16 15:29:32.570883 sshd-session[5884]: pam_unix(sshd:session): session closed for user core Dec 16 15:29:32.572000 audit[5884]: USER_END pid=5884 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:32.585636 kernel: audit: type=1106 audit(1765898972.572:911): pid=5884 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:32.572000 audit[5884]: CRED_DISP pid=5884 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:32.589249 systemd[1]: sshd@25-10.230.25.166:22-139.178.89.65:57420.service: Deactivated successfully. Dec 16 15:29:32.591945 kernel: audit: type=1104 audit(1765898972.572:912): pid=5884 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:32.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.25.166:22-139.178.89.65:57420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:29:32.594034 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 15:29:32.596081 systemd-logind[1641]: Session 25 logged out. Waiting for processes to exit. Dec 16 15:29:32.598591 systemd-logind[1641]: Removed session 25. Dec 16 15:29:33.337912 kubelet[3007]: E1216 15:29:33.337773 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-lt27n" podUID="4428d9ef-7f5f-4ea4-93cb-e174a33d2ea2" Dec 16 15:29:34.336721 containerd[1667]: time="2025-12-16T15:29:34.336432632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 15:29:34.647868 containerd[1667]: time="2025-12-16T15:29:34.647682015Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:29:34.649349 containerd[1667]: time="2025-12-16T15:29:34.649291218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 15:29:34.649436 containerd[1667]: time="2025-12-16T15:29:34.649297544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 15:29:34.649856 kubelet[3007]: E1216 15:29:34.649785 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 15:29:34.650351 kubelet[3007]: E1216 15:29:34.649880 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 15:29:34.650427 kubelet[3007]: E1216 15:29:34.650281 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-gjpms_calico-system(fa623572-3c69-4396-806f-a142b1ffa21a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 15:29:34.650538 kubelet[3007]: E1216 15:29:34.650460 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gjpms" podUID="fa623572-3c69-4396-806f-a142b1ffa21a" Dec 16 15:29:37.339839 containerd[1667]: time="2025-12-16T15:29:37.339774889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 15:29:37.730184 containerd[1667]: time="2025-12-16T15:29:37.729834725Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 15:29:37.731939 containerd[1667]: time="2025-12-16T15:29:37.731807766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 15:29:37.732591 containerd[1667]: time="2025-12-16T15:29:37.731937323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 15:29:37.732658 kubelet[3007]: E1216 15:29:37.732192 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:29:37.732658 kubelet[3007]: E1216 15:29:37.732265 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 15:29:37.732658 kubelet[3007]: E1216 15:29:37.732389 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b4bf85fc-bgpbx_calico-apiserver(42b32087-b938-4963-9aa0-ab40f5c370b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 15:29:37.732658 kubelet[3007]: E1216 15:29:37.732454 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b4bf85fc-bgpbx" podUID="42b32087-b938-4963-9aa0-ab40f5c370b3" Dec 16 15:29:37.739433 systemd[1]: Started sshd@26-10.230.25.166:22-139.178.89.65:57424.service - OpenSSH per-connection server daemon (139.178.89.65:57424). Dec 16 15:29:37.755001 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 15:29:37.755140 kernel: audit: type=1130 audit(1765898977.739:914): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.25.166:22-139.178.89.65:57424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 15:29:37.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.25.166:22-139.178.89.65:57424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 15:29:38.339009 containerd[1667]: time="2025-12-16T15:29:38.338782844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 15:29:38.560000 audit[5908]: USER_ACCT pid=5908 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:38.570094 sshd[5908]: Accepted publickey for core from 139.178.89.65 port 57424 ssh2: RSA SHA256:rUwtMs6AM7ga7enH7ssLkmEdit4Ilf+ScAl9FSBwKtA Dec 16 15:29:38.571048 kernel: audit: type=1101 audit(1765898978.560:915): pid=5908 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:38.572000 audit[5908]: CRED_ACQ pid=5908 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:38.573462 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 15:29:38.579176 kernel: audit: type=1103 audit(1765898978.572:916): pid=5908 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 15:29:38.579284 kernel: audit: type=1006 audit(1765898978.572:917): pid=5908 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 15:29:38.572000 audit[5908]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeab09bad0 a2=3 a3=0 items=0 ppid=1 pid=5908 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:38.583040 kernel: audit: type=1300 audit(1765898978.572:917): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeab09bad0 a2=3 a3=0 items=0 ppid=1 pid=5908 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 15:29:38.586941 kernel: audit: type=1327 audit(1765898978.572:917): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:38.572000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 15:29:38.597810 systemd-logind[1641]: New session 26 of user core. Dec 16 15:29:38.602793 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 15:29:38.608000 audit[5908]: USER_START pid=5908 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 15:29:38.615548 kernel: audit: type=1105 audit(1765898978.608:918): pid=5908 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 15:29:38.616000 audit[5913]: CRED_ACQ pid=5913 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 15:29:38.622594 kernel: audit: type=1103 audit(1765898978.616:919): pid=5913 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 15:29:38.688717 containerd[1667]: time="2025-12-16T15:29:38.688651731Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 15:29:38.690134 containerd[1667]: time="2025-12-16T15:29:38.690084537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 15:29:38.690234 containerd[1667]: time="2025-12-16T15:29:38.690205581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Dec 16 15:29:38.690861 kubelet[3007]: E1216 15:29:38.690482 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 15:29:38.690861 kubelet[3007]: E1216 15:29:38.690596 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 15:29:38.691538 kubelet[3007]: E1216 15:29:38.691090 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75675d747f-2cjwb_calico-system(71c27099-4499-4f5b-8630-5d35b5c1100b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 15:29:38.691538 kubelet[3007]: E1216 15:29:38.691201 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75675d747f-2cjwb" podUID="71c27099-4499-4f5b-8630-5d35b5c1100b"
Dec 16 15:29:38.692592 containerd[1667]: time="2025-12-16T15:29:38.691880324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 15:29:39.023849 containerd[1667]: time="2025-12-16T15:29:39.023698377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 15:29:39.025733 containerd[1667]: time="2025-12-16T15:29:39.025576811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 15:29:39.025733 containerd[1667]: time="2025-12-16T15:29:39.025688359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Dec 16 15:29:39.026337 kubelet[3007]: E1216 15:29:39.026258 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 15:29:39.027155 kubelet[3007]: E1216 15:29:39.026349 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 15:29:39.027155 kubelet[3007]: E1216 15:29:39.026465 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7wvd4_calico-system(27e89a24-5a1a-4b44-908b-951574a9d075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 15:29:39.029970 containerd[1667]: time="2025-12-16T15:29:39.029928569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 15:29:39.172467 sshd[5913]: Connection closed by 139.178.89.65 port 57424
Dec 16 15:29:39.173449 sshd-session[5908]: pam_unix(sshd:session): session closed for user core
Dec 16 15:29:39.178000 audit[5908]: USER_END pid=5908 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 15:29:39.190538 kernel: audit: type=1106 audit(1765898979.178:920): pid=5908 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 15:29:39.178000 audit[5908]: CRED_DISP pid=5908 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 15:29:39.192215 systemd[1]: sshd@26-10.230.25.166:22-139.178.89.65:57424.service: Deactivated successfully.
Dec 16 15:29:39.197845 kernel: audit: type=1104 audit(1765898979.178:921): pid=5908 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 15:29:39.196993 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 15:29:39.201796 systemd-logind[1641]: Session 26 logged out. Waiting for processes to exit.
Dec 16 15:29:39.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.25.166:22-139.178.89.65:57424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 15:29:39.203833 systemd-logind[1641]: Removed session 26.
Dec 16 15:29:39.354158 containerd[1667]: time="2025-12-16T15:29:39.354067210Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 15:29:39.355681 containerd[1667]: time="2025-12-16T15:29:39.355579523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 15:29:39.355681 containerd[1667]: time="2025-12-16T15:29:39.355641569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Dec 16 15:29:39.356458 kubelet[3007]: E1216 15:29:39.356026 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 15:29:39.356458 kubelet[3007]: E1216 15:29:39.356116 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 15:29:39.356458 kubelet[3007]: E1216 15:29:39.356282 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7wvd4_calico-system(27e89a24-5a1a-4b44-908b-951574a9d075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 15:29:39.356458 kubelet[3007]: E1216 15:29:39.356373 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wvd4" podUID="27e89a24-5a1a-4b44-908b-951574a9d075"
Dec 16 15:29:39.682207 systemd[1]: Started sshd@27-10.230.25.166:22-37.142.173.63:52014.service - OpenSSH per-connection server daemon (37.142.173.63:52014).
Dec 16 15:29:39.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.230.25.166:22-37.142.173.63:52014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 15:29:40.337583 kubelet[3007]: E1216 15:29:40.337485 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8d6c5b45-zv6vg" podUID="5a7920f3-ed03-480b-921f-a7a3eaa95ad5"
Dec 16 15:29:41.159129 sshd[5925]: Invalid user unknown from 37.142.173.63 port 52014
Dec 16 15:29:41.452992 sshd[5925]: PAM user mismatch
Dec 16 15:29:41.452000 audit[5925]: USER_ERR pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=37.142.173.63 addr=37.142.173.63 terminal=ssh res=failed'
Dec 16 15:29:41.457414 systemd[1]: sshd@27-10.230.25.166:22-37.142.173.63:52014.service: Deactivated successfully.
Dec 16 15:29:41.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.230.25.166:22-37.142.173.63:52014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 15:29:42.338027 containerd[1667]: time="2025-12-16T15:29:42.337483048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 15:29:42.661300 containerd[1667]: time="2025-12-16T15:29:42.660939589Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 15:29:42.664196 containerd[1667]: time="2025-12-16T15:29:42.664103480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 15:29:42.664462 containerd[1667]: time="2025-12-16T15:29:42.664331300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Dec 16 15:29:42.666171 kubelet[3007]: E1216 15:29:42.665397 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 15:29:42.666171 kubelet[3007]: E1216 15:29:42.665462 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 15:29:42.666171 kubelet[3007]: E1216 15:29:42.665609 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c66d96545-p5cf6_calico-system(ee0bd064-b3c2-43ac-bd37-37a79e955339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 15:29:42.667957 containerd[1667]: time="2025-12-16T15:29:42.667655240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 15:29:42.975821 containerd[1667]: time="2025-12-16T15:29:42.975580410Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 15:29:42.977257 containerd[1667]: time="2025-12-16T15:29:42.977165261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 15:29:42.977551 containerd[1667]: time="2025-12-16T15:29:42.977403205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Dec 16 15:29:42.977995 kubelet[3007]: E1216 15:29:42.977928 3007 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 15:29:42.978180 kubelet[3007]: E1216 15:29:42.978121 3007 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 15:29:42.978576 kubelet[3007]: E1216 15:29:42.978538 3007 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c66d96545-p5cf6_calico-system(ee0bd064-b3c2-43ac-bd37-37a79e955339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 15:29:42.978989 kubelet[3007]: E1216 15:29:42.978926 3007 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c66d96545-p5cf6" podUID="ee0bd064-b3c2-43ac-bd37-37a79e955339"
Dec 16 15:29:43.030176 kernel: kauditd_printk_skb: 4 callbacks suppressed
Dec 16 15:29:43.030397 kernel: audit: type=1130 audit(1765898983.020:926): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.230.25.166:22-179.185.161.145:45736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 15:29:43.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.230.25.166:22-179.185.161.145:45736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 15:29:43.020996 systemd[1]: Started sshd@28-10.230.25.166:22-179.185.161.145:45736.service - OpenSSH per-connection server daemon (179.185.161.145:45736).