Dec 16 14:13:31.203137 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025 Dec 16 14:13:31.203196 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 14:13:31.203223 kernel: BIOS-provided physical RAM map: Dec 16 14:13:31.203234 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 16 14:13:31.203277 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 16 14:13:31.203288 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 16 14:13:31.203312 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Dec 16 14:13:31.203329 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Dec 16 14:13:31.203340 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 16 14:13:31.203351 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 16 14:13:31.203362 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 14:13:31.203372 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 16 14:13:31.203388 kernel: NX (Execute Disable) protection: active Dec 16 14:13:31.203399 kernel: APIC: Static calls initialized Dec 16 14:13:31.203412 kernel: SMBIOS 2.8 present. Dec 16 14:13:31.203424 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Dec 16 14:13:31.203436 kernel: DMI: Memory slots populated: 1/1 Dec 16 14:13:31.203452 kernel: Hypervisor detected: KVM Dec 16 14:13:31.203464 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 16 14:13:31.203475 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 14:13:31.203487 kernel: kvm-clock: using sched offset of 4954589366 cycles Dec 16 14:13:31.203500 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 14:13:31.203512 kernel: tsc: Detected 2799.998 MHz processor Dec 16 14:13:31.203524 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 14:13:31.203536 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 14:13:31.203552 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 16 14:13:31.203565 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 16 14:13:31.203576 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 14:13:31.203588 kernel: Using GB pages for direct mapping Dec 16 14:13:31.203600 kernel: ACPI: Early table checksum verification disabled Dec 16 14:13:31.203611 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Dec 16 14:13:31.203623 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 14:13:31.203635 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 14:13:31.203664 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 14:13:31.203677 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Dec 16 14:13:31.203689 kernel: ACPI: APIC 
0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 14:13:31.203700 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 14:13:31.203712 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 14:13:31.203724 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 14:13:31.203740 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Dec 16 14:13:31.203758 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Dec 16 14:13:31.203770 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Dec 16 14:13:31.203782 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Dec 16 14:13:31.203795 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Dec 16 14:13:31.203811 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Dec 16 14:13:31.203824 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Dec 16 14:13:31.203835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 16 14:13:31.203848 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 16 14:13:31.203860 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Dec 16 14:13:31.203872 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Dec 16 14:13:31.203884 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Dec 16 14:13:31.203901 kernel: Zone ranges: Dec 16 14:13:31.203914 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 14:13:31.203926 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Dec 16 14:13:31.203938 kernel: Normal empty Dec 16 14:13:31.203950 kernel: Device empty Dec 16 14:13:31.203962 kernel: Movable zone start for each node Dec 16 14:13:31.203974 kernel: Early memory node ranges Dec 16 14:13:31.203986 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 16 14:13:31.204003 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Dec 16 14:13:31.204032 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Dec 16 14:13:31.204045 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 14:13:31.204058 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 16 14:13:31.204070 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Dec 16 14:13:31.204082 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 16 14:13:31.204100 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 14:13:31.204119 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 14:13:31.204132 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 16 14:13:31.204144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 14:13:31.204157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 14:13:31.204180 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 14:13:31.204192 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 14:13:31.204203 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 14:13:31.204219 kernel: TSC deadline timer available Dec 16 14:13:31.204243 kernel: CPU topo: Max. logical packages: 16 Dec 16 14:13:31.204254 kernel: CPU topo: Max. logical dies: 16 Dec 16 14:13:31.204265 kernel: CPU topo: Max. dies per package: 1 Dec 16 14:13:31.204276 kernel: CPU topo: Max. 
threads per core: 1 Dec 16 14:13:31.204287 kernel: CPU topo: Num. cores per package: 1 Dec 16 14:13:31.204311 kernel: CPU topo: Num. threads per package: 1 Dec 16 14:13:31.204322 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Dec 16 14:13:31.204338 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 14:13:31.204350 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 16 14:13:31.204374 kernel: Booting paravirtualized kernel on KVM Dec 16 14:13:31.204386 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 14:13:31.204398 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Dec 16 14:13:31.204410 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Dec 16 14:13:31.204422 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Dec 16 14:13:31.204992 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Dec 16 14:13:31.205025 kernel: kvm-guest: PV spinlocks enabled Dec 16 14:13:31.205040 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 14:13:31.205055 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 14:13:31.205090 kernel: random: crng init done Dec 16 14:13:31.205103 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 14:13:31.205116 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 14:13:31.205150 kernel: Fallback order for Node 0: 0 Dec 16 14:13:31.205164 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Dec 16 14:13:31.205177 kernel: Policy zone: DMA32 Dec 16 14:13:31.206057 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 14:13:31.206100 kernel: software IO TLB: area num 16. Dec 16 14:13:31.206114 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Dec 16 14:13:31.206127 kernel: Kernel/User page tables isolation: enabled Dec 16 14:13:31.206170 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 14:13:31.206183 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 14:13:31.206195 kernel: Dynamic Preempt: voluntary Dec 16 14:13:31.206220 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 14:13:31.206233 kernel: rcu: RCU event tracing is enabled. Dec 16 14:13:31.206245 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Dec 16 14:13:31.206257 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 14:13:31.206292 kernel: Rude variant of Tasks RCU enabled. Dec 16 14:13:31.206305 kernel: Tracing variant of Tasks RCU enabled. Dec 16 14:13:31.206318 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 14:13:31.206330 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Dec 16 14:13:31.206343 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 16 14:13:31.206355 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Dec 16 14:13:31.206368 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 16 14:13:31.206442 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Dec 16 14:13:31.206457 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 14:13:31.206491 kernel: Console: colour VGA+ 80x25 Dec 16 14:13:31.206513 kernel: printk: legacy console [tty0] enabled Dec 16 14:13:31.206526 kernel: printk: legacy console [ttyS0] enabled Dec 16 14:13:31.206542 kernel: ACPI: Core revision 20240827 Dec 16 14:13:31.206556 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 14:13:31.206569 kernel: x2apic enabled Dec 16 14:13:31.206582 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 14:13:31.206595 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Dec 16 14:13:31.206618 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Dec 16 14:13:31.206631 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 14:13:31.206657 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 16 14:13:31.206670 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 16 14:13:31.206693 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 14:13:31.206706 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 14:13:31.206718 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 14:13:31.206731 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 16 14:13:31.206744 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 14:13:31.206756 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 14:13:31.206769 kernel: MDS: Mitigation: Clear CPU buffers Dec 16 14:13:31.206781 kernel: MMIO Stale Data: Unknown: No mitigations Dec 16 14:13:31.206793 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 16 14:13:31.206806 kernel: active return thunk: its_return_thunk Dec 16 14:13:31.206818 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 14:13:31.206841 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 14:13:31.206854 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 14:13:31.206867 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 14:13:31.206879 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 14:13:31.206892 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 16 14:13:31.206904 kernel: Freeing SMP alternatives memory: 32K Dec 16 14:13:31.206917 kernel: pid_max: default: 32768 minimum: 301 Dec 16 14:13:31.206929 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 14:13:31.206942 kernel: landlock: Up and running. Dec 16 14:13:31.206964 kernel: SELinux: Initializing. Dec 16 14:13:31.206977 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 14:13:31.206990 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 14:13:31.207003 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Dec 16 14:13:31.207035 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Dec 16 14:13:31.207049 kernel: signal: max sigframe size: 1776 Dec 16 14:13:31.207063 kernel: rcu: Hierarchical SRCU implementation. Dec 16 14:13:31.207076 kernel: rcu: Max phase no-delay instances is 400. Dec 16 14:13:31.207089 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Dec 16 14:13:31.207114 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 14:13:31.207128 kernel: smp: Bringing up secondary CPUs ... Dec 16 14:13:31.207141 kernel: smpboot: x86: Booting SMP configuration: Dec 16 14:13:31.207154 kernel: .... node #0, CPUs: #1 Dec 16 14:13:31.207167 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 14:13:31.207179 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Dec 16 14:13:31.207193 kernel: Memory: 1914108K/2096616K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 176492K reserved, 0K cma-reserved) Dec 16 14:13:31.207216 kernel: devtmpfs: initialized Dec 16 14:13:31.207230 kernel: x86/mm: Memory block size: 128MB Dec 16 14:13:31.207243 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 14:13:31.207256 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Dec 16 14:13:31.207268 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 14:13:31.207281 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 14:13:31.207294 kernel: audit: initializing netlink subsys (disabled) Dec 16 14:13:31.207317 kernel: audit: type=2000 audit(1765894407.819:1): state=initialized audit_enabled=0 res=1 Dec 16 14:13:31.207330 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 14:13:31.207343 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 14:13:31.207356 kernel: cpuidle: using governor menu Dec 16 14:13:31.207369 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 14:13:31.207382 kernel: dca service started, version 1.12.1 Dec 16 14:13:31.207399 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Dec 16 14:13:31.207421 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 16 14:13:31.207435 kernel: PCI: Using configuration type 1 for base access Dec 16 14:13:31.207448 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 14:13:31.207460 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 14:13:31.207473 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 14:13:31.207486 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 14:13:31.207499 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 14:13:31.207521 kernel: ACPI: Added _OSI(Module Device) Dec 16 14:13:31.207535 kernel: ACPI: Added _OSI(Processor Device) Dec 16 14:13:31.207548 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 14:13:31.207560 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 14:13:31.207573 kernel: ACPI: Interpreter enabled Dec 16 14:13:31.207586 kernel: ACPI: PM: (supports S0 S5) Dec 16 14:13:31.207599 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 14:13:31.207621 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 14:13:31.207634 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 14:13:31.207657 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 16 14:13:31.207670 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 14:13:31.208078 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 14:13:31.208328 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 16 14:13:31.208554 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 16 14:13:31.208575 kernel: PCI host bridge to bus 0000:00 Dec 16 14:13:31.208801 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 14:13:31.212130 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 14:13:31.212336 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 14:13:31.212545 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 16 14:13:31.212783 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 16 14:13:31.212972 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Dec 16 14:13:31.215192 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 14:13:31.215464 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 16 14:13:31.215716 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Dec 16 14:13:31.215943 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Dec 16 14:13:31.216170 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Dec 16 14:13:31.216388 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Dec 16 14:13:31.216618 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 14:13:31.216858 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 14:13:31.219101 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Dec 16 14:13:31.219376 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 14:13:31.219598 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 16 14:13:31.219818 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 14:13:31.220061 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 14:13:31.220269 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Dec 16 14:13:31.220470 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 
14:13:31.220703 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 16 14:13:31.220904 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 14:13:31.221144 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 14:13:31.221349 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Dec 16 14:13:31.221574 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 14:13:31.221788 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 16 14:13:31.222008 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 14:13:31.222241 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 14:13:31.222444 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Dec 16 14:13:31.222657 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 14:13:31.222860 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 16 14:13:31.223079 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 14:13:31.223317 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 14:13:31.223529 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Dec 16 14:13:31.223743 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 14:13:31.223943 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 16 14:13:31.224172 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 14:13:31.224384 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 14:13:31.224603 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Dec 16 14:13:31.224817 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 14:13:31.225029 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 14:13:31.225234 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 14:13:31.225449 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 14:13:31.225692 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Dec 16 14:13:31.225895 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 14:13:31.226140 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 14:13:31.226344 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 14:13:31.226553 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 14:13:31.226785 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Dec 16 14:13:31.226986 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 14:13:31.227206 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 14:13:31.227407 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 14:13:31.227617 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 14:13:31.227833 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Dec 16 14:13:31.228072 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Dec 16 14:13:31.228277 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Dec 16 14:13:31.228477 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Dec 16 14:13:31.228731 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 16 14:13:31.228932 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Dec 16 14:13:31.229154 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Dec 16 14:13:31.229374 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Dec 16 14:13:31.229585 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 16 14:13:31.229802 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 16 14:13:31.230043 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 16 14:13:31.230249 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Dec 16 14:13:31.230452 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Dec 16 14:13:31.230694 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 16 14:13:31.230894 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Dec 16 14:13:31.231135 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Dec 16 14:13:31.231375 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Dec 16 14:13:31.231582 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 14:13:31.231816 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 14:13:31.232036 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 14:13:31.232250 kernel: pci_bus 0000:02: extended config space not accessible Dec 16 14:13:31.232475 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Dec 16 14:13:31.232701 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Dec 16 14:13:31.232906 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 14:13:31.233156 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Dec 16 14:13:31.233364 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Dec 16 14:13:31.233566 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 14:13:31.233804 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Dec 16 14:13:31.234024 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Dec 16 14:13:31.234244 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 14:13:31.234447 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 14:13:31.234661 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 14:13:31.234864 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 14:13:31.235084 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 14:13:31.235286 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 14:13:31.235321 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 14:13:31.235335 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 14:13:31.235349 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 14:13:31.235362 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 14:13:31.235375 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 16 14:13:31.235393 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 16 14:13:31.235407 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 16 14:13:31.235430 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 16 14:13:31.235443 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 16 14:13:31.235456 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 16 14:13:31.235470 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 16 14:13:31.235483 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 16 14:13:31.235496 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 
16 14:13:31.235509 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 16 14:13:31.235531 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 16 14:13:31.235545 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 16 14:13:31.235559 kernel: iommu: Default domain type: Translated Dec 16 14:13:31.235572 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 14:13:31.235585 kernel: PCI: Using ACPI for IRQ routing Dec 16 14:13:31.235598 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 14:13:31.235612 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 16 14:13:31.235634 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Dec 16 14:13:31.235845 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 16 14:13:31.236068 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 16 14:13:31.236269 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 14:13:31.236288 kernel: vgaarb: loaded Dec 16 14:13:31.236302 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 14:13:31.236315 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 14:13:31.236344 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 14:13:31.236358 kernel: pnp: PnP ACPI init Dec 16 14:13:31.236588 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 16 14:13:31.236610 kernel: pnp: PnP ACPI: found 5 devices Dec 16 14:13:31.236624 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 14:13:31.236647 kernel: NET: Registered PF_INET protocol family Dec 16 14:13:31.236662 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 14:13:31.236688 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 16 14:13:31.236702 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 14:13:31.236716 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 14:13:31.236729 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 16 14:13:31.236742 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 16 14:13:31.236756 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 14:13:31.236778 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 14:13:31.236792 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 14:13:31.236806 kernel: NET: Registered PF_XDP protocol family Dec 16 14:13:31.237006 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Dec 16 14:13:31.237228 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 16 14:13:31.237429 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 16 14:13:31.237629 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 16 14:13:31.237857 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 16 14:13:31.238081 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 16 14:13:31.238283 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 16 14:13:31.238483 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 16 14:13:31.238698 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Dec 16 14:13:31.238898 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Dec 16 14:13:31.239117 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Dec 16 14:13:31.239336 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Dec 16 14:13:31.239536 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Dec 16 14:13:31.239750 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Dec 16 14:13:31.239950 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Dec 16 14:13:31.240170 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Dec 16 14:13:31.240376 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 14:13:31.240659 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 14:13:31.240863 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 14:13:31.241096 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 16 14:13:31.241298 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 16 14:13:31.241497 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 14:13:31.241710 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 14:13:31.241909 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 16 14:13:31.242146 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 16 14:13:31.242347 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 14:13:31.242547 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 14:13:31.242763 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 16 14:13:31.250290 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 16 14:13:31.250540 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 14:13:31.250784 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 14:13:31.250994 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 16 14:13:31.251227 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 16 14:13:31.251433 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 14:13:31.251673 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 14:13:31.251887 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 16 14:13:31.252112 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 16 14:13:31.252318 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 14:13:31.252524 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 14:13:31.252740 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 16 14:13:31.252961 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 14:13:31.253195 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 14:13:31.253404 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 14:13:31.253608 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Dec 16 14:13:31.253832 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 14:13:31.256368 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 14:13:31.256603 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 14:13:31.256832 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 16 14:13:31.257084 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 14:13:31.257292 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 14:13:31.257522 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 14:13:31.257736 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 14:13:31.257942 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 14:13:31.258165 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 16 14:13:31.258370 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 16 14:13:31.258557 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Dec 16 14:13:31.258779 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 16 14:13:31.258975 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Dec 16 14:13:31.259197 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 14:13:31.259403 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 16 14:13:31.259631 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Dec 16 14:13:31.259861 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 16 14:13:31.261491 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 14:13:31.261721 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Dec 16 14:13:31.261917 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 16 14:13:31.262149 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 14:13:31.262354 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 16 14:13:31.262547 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 16 14:13:31.262754 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 14:13:31.262958 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Dec 16 14:13:31.264201 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 16 14:13:31.264420 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 14:13:31.264633 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Dec 16 14:13:31.264852 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 16 14:13:31.265066 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 14:13:31.265273 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 16 14:13:31.265468 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 16 14:13:31.265702 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 14:13:31.265906 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 16 14:13:31.267405 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 16 14:13:31.267612 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 14:13:31.267647 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 14:13:31.267664 kernel: PCI: CLS 0 bytes, default 64 Dec 16 14:13:31.267694 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 14:13:31.267709 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 16 14:13:31.267723 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 14:13:31.267737 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Dec 16 14:13:31.267751 kernel: Initialise system trusted keyrings Dec 16 14:13:31.267765 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 14:13:31.267779 
kernel: Key type asymmetric registered Dec 16 14:13:31.267802 kernel: Asymmetric key parser 'x509' registered Dec 16 14:13:31.267816 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 14:13:31.267830 kernel: io scheduler mq-deadline registered Dec 16 14:13:31.267844 kernel: io scheduler kyber registered Dec 16 14:13:31.267857 kernel: io scheduler bfq registered Dec 16 14:13:31.268096 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 16 14:13:31.268307 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 16 14:13:31.268533 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 14:13:31.268757 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 16 14:13:31.268965 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 16 14:13:31.269194 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 14:13:31.269410 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 16 14:13:31.269647 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 16 14:13:31.269862 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 14:13:31.270102 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 16 14:13:31.270310 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 16 14:13:31.270515 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 14:13:31.270764 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 16 14:13:31.270971 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 16 14:13:31.271225 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 14:13:31.271584 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 16 14:13:31.271828 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 16 14:13:31.272116 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 14:13:31.272330 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 16 14:13:31.272535 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 16 14:13:31.272754 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 14:13:31.272958 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 16 14:13:31.273206 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 16 14:13:31.273442 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 14:13:31.273463 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 14:13:31.273478 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 14:13:31.273492 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 16 14:13:31.273506 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 14:13:31.273535 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 14:13:31.273550 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 
14:13:31.273563 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 14:13:31.273577 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 14:13:31.273801 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 14:13:31.273824 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 14:13:31.274058 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 14:13:31.274283 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T14:13:29 UTC (1765894409) Dec 16 14:13:31.274504 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 16 14:13:31.274525 kernel: intel_pstate: CPU model not supported Dec 16 14:13:31.274539 kernel: NET: Registered PF_INET6 protocol family Dec 16 14:13:31.274559 kernel: Segment Routing with IPv6 Dec 16 14:13:31.274573 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 14:13:31.274601 kernel: NET: Registered PF_PACKET protocol family Dec 16 14:13:31.274615 kernel: Key type dns_resolver registered Dec 16 14:13:31.274645 kernel: IPI shorthand broadcast: enabled Dec 16 14:13:31.274661 kernel: sched_clock: Marking stable (2127044771, 225155553)->(2474547695, -122347371) Dec 16 14:13:31.274675 kernel: registered taskstats version 1 Dec 16 14:13:31.274688 kernel: Loading compiled-in X.509 certificates Dec 16 14:13:31.274702 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4' Dec 16 14:13:31.274726 kernel: Demotion targets for Node 0: null Dec 16 14:13:31.274741 kernel: Key type .fscrypt registered Dec 16 14:13:31.274755 kernel: Key type fscrypt-provisioning registered Dec 16 14:13:31.274769 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 14:13:31.274782 kernel: ima: Allocated hash algorithm: sha1 Dec 16 14:13:31.274796 kernel: ima: No architecture policies found Dec 16 14:13:31.274810 kernel: clk: Disabling unused clocks Dec 16 14:13:31.274823 kernel: Freeing unused kernel image (initmem) memory: 15464K Dec 16 14:13:31.274846 kernel: Write protecting the kernel read-only data: 45056k Dec 16 14:13:31.274860 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Dec 16 14:13:31.274874 kernel: Run /init as init process Dec 16 14:13:31.274888 kernel: with arguments: Dec 16 14:13:31.274902 kernel: /init Dec 16 14:13:31.274925 kernel: with environment: Dec 16 14:13:31.274938 kernel: HOME=/ Dec 16 14:13:31.274961 kernel: TERM=linux Dec 16 14:13:31.274981 kernel: ACPI: bus type USB registered Dec 16 14:13:31.274995 kernel: usbcore: registered new interface driver usbfs Dec 16 14:13:31.275022 kernel: usbcore: registered new interface driver hub Dec 16 14:13:31.275043 kernel: usbcore: registered new device driver usb Dec 16 14:13:31.275267 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 16 14:13:31.275479 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 16 14:13:31.275718 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 16 14:13:31.275929 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 16 14:13:31.276158 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 16 14:13:31.276376 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 16 14:13:31.276655 kernel: hub 1-0:1.0: USB hub found Dec 16 14:13:31.276882 kernel: hub 1-0:1.0: 4 ports detected Dec 16 14:13:31.277156 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 16 14:13:31.277400 kernel: hub 2-0:1.0: USB hub found Dec 16 14:13:31.277625 kernel: hub 2-0:1.0: 4 ports detected Dec 16 14:13:31.277656 kernel: SCSI subsystem initialized Dec 16 14:13:31.277671 kernel: libata version 3.00 loaded. Dec 16 14:13:31.277897 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 14:13:31.277920 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 14:13:31.278149 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 14:13:31.278354 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 14:13:31.278557 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 14:13:31.278804 kernel: scsi host0: ahci Dec 16 14:13:31.279106 kernel: scsi host1: ahci Dec 16 14:13:31.279334 kernel: scsi host2: ahci Dec 16 14:13:31.279547 kernel: scsi host3: ahci Dec 16 14:13:31.279794 kernel: scsi host4: ahci Dec 16 14:13:31.280046 kernel: scsi host5: ahci Dec 16 14:13:31.280082 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Dec 16 14:13:31.280097 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Dec 16 14:13:31.280111 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Dec 16 14:13:31.280125 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Dec 16 14:13:31.280139 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Dec 16 14:13:31.280162 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Dec 16 14:13:31.280400 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 14:13:31.280430 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 14:13:31.280444 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 14:13:31.280458 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 14:13:31.280471 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 14:13:31.280485 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 16 14:13:31.280499 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 14:13:31.280517 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 14:13:31.280765 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 16 14:13:31.280788 kernel: usbcore: registered new interface driver usbhid Dec 16 14:13:31.280983 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 16 14:13:31.281004 kernel: usbhid: USB HID core driver Dec 16 14:13:31.281035 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 14:13:31.281056 kernel: GPT:25804799 != 125829119 Dec 16 14:13:31.281070 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 16 14:13:31.281084 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 14:13:31.281355 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 16 14:13:31.281377 kernel: GPT:25804799 != 125829119 Dec 16 14:13:31.281391 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 14:13:31.281405 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 14:13:31.281425 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 14:13:31.281439 kernel: device-mapper: uevent: version 1.0.3 Dec 16 14:13:31.281453 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 14:13:31.281467 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 14:13:31.281481 kernel: raid6: sse2x4 gen() 14229 MB/s Dec 16 14:13:31.281494 kernel: raid6: sse2x2 gen() 9684 MB/s Dec 16 14:13:31.281507 kernel: raid6: sse2x1 gen() 9963 MB/s Dec 16 14:13:31.281526 kernel: raid6: using algorithm sse2x4 gen() 14229 MB/s Dec 16 14:13:31.281540 kernel: raid6: .... xor() 7997 MB/s, rmw enabled Dec 16 14:13:31.281553 kernel: raid6: using ssse3x2 recovery algorithm Dec 16 14:13:31.281567 kernel: xor: automatically using best checksumming function avx Dec 16 14:13:31.281580 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 14:13:31.281594 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (193) Dec 16 14:13:31.281608 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f Dec 16 14:13:31.281626 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 14:13:31.281652 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 14:13:31.281666 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 14:13:31.281680 kernel: loop: module loaded Dec 16 14:13:31.281694 kernel: loop0: detected capacity change from 0 to 100136 Dec 16 14:13:31.281708 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 14:13:31.281728 systemd[1]: Successfully made /usr/ read-only. Dec 16 14:13:31.281753 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 14:13:31.281769 systemd[1]: Detected virtualization kvm. Dec 16 14:13:31.281783 systemd[1]: Detected architecture x86-64. Dec 16 14:13:31.281796 systemd[1]: Running in initrd. Dec 16 14:13:31.281810 systemd[1]: No hostname configured, using default hostname. Dec 16 14:13:31.281829 systemd[1]: Hostname set to . Dec 16 14:13:31.281844 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 14:13:31.281858 systemd[1]: Queued start job for default target initrd.target. Dec 16 14:13:31.281872 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 14:13:31.281887 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 14:13:31.281901 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 14:13:31.281916 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 14:13:31.281936 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 14:13:31.281951 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 14:13:31.281966 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 14:13:31.281981 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 16 14:13:31.281995 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 14:13:31.282043 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 14:13:31.282059 systemd[1]: Reached target paths.target - Path Units. Dec 16 14:13:31.282080 systemd[1]: Reached target slices.target - Slice Units. Dec 16 14:13:31.282094 systemd[1]: Reached target swap.target - Swaps. Dec 16 14:13:31.282109 systemd[1]: Reached target timers.target - Timer Units. Dec 16 14:13:31.282123 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 14:13:31.282137 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 14:13:31.282156 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 14:13:31.282171 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 14:13:31.282186 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 14:13:31.282200 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 14:13:31.282214 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 14:13:31.282228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 14:13:31.282243 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 14:13:31.282261 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 14:13:31.282277 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 14:13:31.282291 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 14:13:31.282305 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 14:13:31.282320 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 14:13:31.282335 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 14:13:31.282349 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 14:13:31.282369 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 14:13:31.282384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 14:13:31.282398 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 14:13:31.282417 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 14:13:31.282432 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 14:13:31.282447 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 14:13:31.282500 systemd-journald[330]: Collecting audit messages is enabled. Dec 16 14:13:31.282551 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 14:13:31.282567 kernel: audit: type=1130 audit(1765894411.242:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.282581 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 14:13:31.282596 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Dec 16 14:13:31.282621 kernel: Bridge firewalling registered Dec 16 14:13:31.282644 systemd-journald[330]: Journal started Dec 16 14:13:31.282687 systemd-journald[330]: Runtime Journal (/run/log/journal/e45ca4d8db314d859d2bff6a0ee3fda0) is 4.7M, max 37.8M, 33M free. Dec 16 14:13:31.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.279120 systemd-modules-load[334]: Inserted module 'br_netfilter' Dec 16 14:13:31.339286 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 14:13:31.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.346841 kernel: audit: type=1130 audit(1765894411.340:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.346889 kernel: audit: type=1130 audit(1765894411.346:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.345366 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 14:13:31.347263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 14:13:31.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.359329 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 14:13:31.360300 kernel: audit: type=1130 audit(1765894411.353:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.365199 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 14:13:31.366930 kernel: audit: type=1130 audit(1765894411.361:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.369225 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 14:13:31.376185 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 14:13:31.393241 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 14:13:31.393978 systemd-tmpfiles[356]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Dec 16 14:13:31.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.399597 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 14:13:31.403492 kernel: audit: type=1130 audit(1765894411.397:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.398000 audit: BPF prog-id=6 op=LOAD Dec 16 14:13:31.409043 kernel: audit: type=1334 audit(1765894411.398:8): prog-id=6 op=LOAD Dec 16 14:13:31.409367 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 14:13:31.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.416047 kernel: audit: type=1130 audit(1765894411.410:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.421182 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 14:13:31.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.425107 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 14:13:31.430170 kernel: audit: type=1130 audit(1765894411.422:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.463719 dracut-cmdline[374]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 14:13:31.486380 systemd-resolved[368]: Positive Trust Anchors: Dec 16 14:13:31.486402 systemd-resolved[368]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 14:13:31.486408 systemd-resolved[368]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 14:13:31.486450 systemd-resolved[368]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 14:13:31.517052 systemd-resolved[368]: Defaulting to hostname 'linux'. 
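The dracut-cmdline entry above repeats the full kernel command line the initrd is working with (root label, dm-verity hash for /usr, console settings). If you need to confirm what the booted system actually received, the same string is visible at runtime; a small sketch:

    # The command line the running kernel was booted with
    cat /proc/cmdline

    # Pull out one parameter of interest, e.g. the /usr verity hash
    grep -o 'verity.usrhash=[^ ]*' /proc/cmdline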
Dec 16 14:13:31.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.518699 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 14:13:31.519519 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 14:13:31.609050 kernel: Loading iSCSI transport class v2.0-870. Dec 16 14:13:31.625059 kernel: iscsi: registered transport (tcp) Dec 16 14:13:31.652347 kernel: iscsi: registered transport (qla4xxx) Dec 16 14:13:31.652410 kernel: QLogic iSCSI HBA Driver Dec 16 14:13:31.684358 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 14:13:31.704662 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 14:13:31.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.708254 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 14:13:31.771442 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 14:13:31.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.775372 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 14:13:31.778202 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 14:13:31.822346 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 14:13:31.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.824000 audit: BPF prog-id=7 op=LOAD Dec 16 14:13:31.824000 audit: BPF prog-id=8 op=LOAD Dec 16 14:13:31.826251 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 14:13:31.861453 systemd-udevd[611]: Using default interface naming scheme 'v257'. Dec 16 14:13:31.877296 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 14:13:31.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.882410 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 14:13:31.918086 dracut-pre-trigger[682]: rd.md=0: removing MD RAID activation Dec 16 14:13:31.921000 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 14:13:31.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.925000 audit: BPF prog-id=9 op=LOAD Dec 16 14:13:31.927579 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 14:13:31.957987 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 16 14:13:31.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.961847 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 14:13:31.989930 systemd-networkd[722]: lo: Link UP Dec 16 14:13:31.990778 systemd-networkd[722]: lo: Gained carrier Dec 16 14:13:31.992220 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 14:13:31.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:31.993034 systemd[1]: Reached target network.target - Network. Dec 16 14:13:32.112820 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 14:13:32.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:32.116547 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 14:13:32.225803 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 14:13:32.244677 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 14:13:32.270566 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 14:13:32.300881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 14:13:32.303385 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 14:13:32.325814 disk-uuid[777]: Primary Header is updated. Dec 16 14:13:32.325814 disk-uuid[777]: Secondary Entries is updated. Dec 16 14:13:32.325814 disk-uuid[777]: Secondary Header is updated. Dec 16 14:13:32.408592 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 14:13:32.440074 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 16 14:13:32.445033 kernel: AES CTR mode by8 optimization enabled Dec 16 14:13:32.449855 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 14:13:32.450177 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 14:13:32.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:32.452668 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 14:13:32.454934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 14:13:32.468287 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 14:13:32.468300 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 14:13:32.475769 systemd-networkd[722]: eth0: Link UP Dec 16 14:13:32.480132 systemd-networkd[722]: eth0: Gained carrier Dec 16 14:13:32.480154 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 14:13:32.497132 systemd-networkd[722]: eth0: DHCPv4 address 10.230.52.194/30, gateway 10.230.52.193 acquired from 10.230.52.193 Dec 16 14:13:32.611182 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 14:13:32.615030 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 16 14:13:32.615081 kernel: audit: type=1130 audit(1765894412.612:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:32.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:32.637816 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 14:13:32.644530 kernel: audit: type=1130 audit(1765894412.638:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:32.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:32.639336 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 14:13:32.645313 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 14:13:32.646891 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 14:13:32.649740 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 14:13:32.675382 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 14:13:32.681687 kernel: audit: type=1130 audit(1765894412.676:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:32.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.392950 disk-uuid[778]: Warning: The kernel is still using the old partition table. Dec 16 14:13:33.392950 disk-uuid[778]: The new table will be used at the next reboot or after you Dec 16 14:13:33.392950 disk-uuid[778]: run partprobe(8) or kpartx(8) Dec 16 14:13:33.392950 disk-uuid[778]: The operation has completed successfully. Dec 16 14:13:33.404301 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 14:13:33.404470 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 14:13:33.415888 kernel: audit: type=1130 audit(1765894413.405:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:13:33.415926 kernel: audit: type=1131 audit(1765894413.405:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.407759 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 14:13:33.449058 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Dec 16 14:13:33.458658 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 14:13:33.458727 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 14:13:33.465948 kernel: BTRFS info (device vda6): turning on async discard Dec 16 14:13:33.466024 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 14:13:33.475157 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 14:13:33.475412 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 14:13:33.481515 kernel: audit: type=1130 audit(1765894413.476:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.479189 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 14:13:33.697119 ignition[882]: Ignition 2.22.0 Dec 16 14:13:33.698094 ignition[882]: Stage: fetch-offline Dec 16 14:13:33.698191 ignition[882]: no configs at "/usr/lib/ignition/base.d" Dec 16 14:13:33.698212 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 14:13:33.700373 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 14:13:33.708006 kernel: audit: type=1130 audit(1765894413.702:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.698363 ignition[882]: parsed url from cmdline: "" Dec 16 14:13:33.704261 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
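The disk-uuid warning a few entries back (new GPT written, kernel still using the old table) clears itself at the next reboot; to make the kernel re-read the table immediately, the message itself points at partprobe(8) or kpartx(8). A minimal sketch, assuming the disk is /dev/vda as the vda6/vda9 lines suggest:

    # Ask the kernel to re-read the partition table without rebooting
    sudo partprobe /dev/vda

    # Confirm the kernel now agrees with the on-disk table
    lsblk /dev/vda
    grep vda /proc/partitions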
Dec 16 14:13:33.698370 ignition[882]: no config URL provided Dec 16 14:13:33.698386 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 14:13:33.698404 ignition[882]: no config at "/usr/lib/ignition/user.ign" Dec 16 14:13:33.698418 ignition[882]: failed to fetch config: resource requires networking Dec 16 14:13:33.698642 ignition[882]: Ignition finished successfully Dec 16 14:13:33.738229 ignition[889]: Ignition 2.22.0 Dec 16 14:13:33.738252 ignition[889]: Stage: fetch Dec 16 14:13:33.738466 ignition[889]: no configs at "/usr/lib/ignition/base.d" Dec 16 14:13:33.738483 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 14:13:33.738617 ignition[889]: parsed url from cmdline: "" Dec 16 14:13:33.738624 ignition[889]: no config URL provided Dec 16 14:13:33.738633 ignition[889]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 14:13:33.738647 ignition[889]: no config at "/usr/lib/ignition/user.ign" Dec 16 14:13:33.738824 ignition[889]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 16 14:13:33.739175 ignition[889]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 16 14:13:33.739207 ignition[889]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 16 14:13:33.753211 systemd-networkd[722]: eth0: Gained IPv6LL Dec 16 14:13:33.754463 ignition[889]: GET result: OK Dec 16 14:13:33.755273 ignition[889]: parsing config with SHA512: 3b053e1640a8bd100f2f303a5c9abeb489f18130e9a326b8b19055680b735f214e6c840918f9a4752386bf984f1fc45e81a0cd3074747a8f6022406034c2374c Dec 16 14:13:33.761705 unknown[889]: fetched base config from "system" Dec 16 14:13:33.761726 unknown[889]: fetched base config from "system" Dec 16 14:13:33.762229 ignition[889]: fetch: fetch complete Dec 16 14:13:33.761735 unknown[889]: fetched user config from "openstack" Dec 16 14:13:33.762236 ignition[889]: fetch: fetch passed Dec 16 14:13:33.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.764395 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 14:13:33.772003 kernel: audit: type=1130 audit(1765894413.765:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.762298 ignition[889]: Ignition finished successfully Dec 16 14:13:33.768195 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 14:13:33.802012 ignition[895]: Ignition 2.22.0 Dec 16 14:13:33.802029 ignition[895]: Stage: kargs Dec 16 14:13:33.802231 ignition[895]: no configs at "/usr/lib/ignition/base.d" Dec 16 14:13:33.802248 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 14:13:33.806883 ignition[895]: kargs: kargs passed Dec 16 14:13:33.806957 ignition[895]: Ignition finished successfully Dec 16 14:13:33.810237 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 14:13:33.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.813589 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
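The fetch stage above shows Ignition's search order on this OpenStack instance: the baked-in /usr/lib/ignition/user.ign (absent), then a config-2 config drive (which never appears), then the metadata service. A hedged way to reproduce that final lookup from the running machine, using the URL straight from the log:

    # Config drive Ignition waited for (not present on this instance)
    ls -l /dev/disk/by-label/ | grep -i config-2 || echo "no config drive"

    # user_data endpoint Ignition fell back to
    curl -s http://169.254.169.254/openstack/latest/user_data | head -n 20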
Dec 16 14:13:33.817924 kernel: audit: type=1130 audit(1765894413.810:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.850170 ignition[902]: Ignition 2.22.0 Dec 16 14:13:33.850204 ignition[902]: Stage: disks Dec 16 14:13:33.850428 ignition[902]: no configs at "/usr/lib/ignition/base.d" Dec 16 14:13:33.850445 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 14:13:33.851695 ignition[902]: disks: disks passed Dec 16 14:13:33.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.853156 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 14:13:33.861050 kernel: audit: type=1130 audit(1765894413.854:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.851763 ignition[902]: Ignition finished successfully Dec 16 14:13:33.854775 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 14:13:33.860336 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 14:13:33.861822 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 14:13:33.863124 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 14:13:33.864673 systemd[1]: Reached target basic.target - Basic System. Dec 16 14:13:33.868273 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 14:13:33.911729 systemd-fsck[911]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 16 14:13:33.915944 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 14:13:33.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:33.918906 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 14:13:34.059077 kernel: EXT4-fs (vda9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none. Dec 16 14:13:34.059639 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 14:13:34.061173 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 14:13:34.063946 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 14:13:34.066509 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 14:13:34.069184 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 14:13:34.074243 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 16 14:13:34.075039 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 14:13:34.076118 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 14:13:34.088521 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (919) Dec 16 14:13:34.092363 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
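Everything the initrd mounts above is located by filesystem or partition label (EFI-SYSTEM, ROOT, OEM, USR-A) rather than by device name. To see the same mapping from a shell, a quick sketch:

    # Labels and partition labels the .device units above are generated from
    lsblk -o NAME,LABEL,PARTLABEL,FSTYPE

    # The udev-managed symlinks behind /dev/disk/by-label and by-partlabel
    ls -l /dev/disk/by-label/ /dev/disk/by-partlabel/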
Dec 16 14:13:34.095886 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 14:13:34.095912 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 14:13:34.098213 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 14:13:34.128009 kernel: BTRFS info (device vda6): turning on async discard Dec 16 14:13:34.128092 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 14:13:34.139362 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 14:13:34.180041 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 14:13:34.197385 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 14:13:34.205351 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Dec 16 14:13:34.210737 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 14:13:34.216899 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 14:13:34.325410 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 14:13:34.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:34.328680 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 14:13:34.330212 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 14:13:34.369047 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 14:13:34.395970 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 14:13:34.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:34.405286 ignition[1036]: INFO : Ignition 2.22.0 Dec 16 14:13:34.405286 ignition[1036]: INFO : Stage: mount Dec 16 14:13:34.408024 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 14:13:34.408024 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 14:13:34.410827 ignition[1036]: INFO : mount: mount passed Dec 16 14:13:34.410827 ignition[1036]: INFO : Ignition finished successfully Dec 16 14:13:34.412673 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 14:13:34.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:34.437849 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 14:13:35.210622 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 14:13:35.259215 systemd-networkd[722]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8d30:24:19ff:fee6:34c2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8d30:24:19ff:fee6:34c2/64 assigned by NDisc. Dec 16 14:13:35.259231 systemd-networkd[722]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
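The networkd hint above concerns the clash between the /128 address handed out for eth0 and the SLAAC /64 derived from the same prefix via NDisc. A minimal sketch of acting on that hint with a drop-in for the zz-default.network unit named in the log; the drop-in path and the token value are illustrative assumptions, not something this system has:

    # Hypothetical drop-in overriding zz-default.network (path is an assumption)
    sudo mkdir -p /etc/systemd/network/zz-default.network.d
    printf '%s\n' \
      '[Network]' \
      '# Pin the SLAAC interface identifier (value shown is illustrative)' \
      'IPv6Token=::24:19ff:fee6:34c2' \
      | sudo tee /etc/systemd/network/zz-default.network.d/10-ipv6.conf
    sudo networkctl reload
    # Alternatively, per the same hint, set UseAutonomousPrefix=no in an
    # [IPv6AcceptRA] section instead of using a token.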
Dec 16 14:13:37.219139 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 14:13:41.228068 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 14:13:41.239543 coreos-metadata[921]: Dec 16 14:13:41.239 WARN failed to locate config-drive, using the metadata service API instead Dec 16 14:13:41.264259 coreos-metadata[921]: Dec 16 14:13:41.264 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 14:13:41.278178 coreos-metadata[921]: Dec 16 14:13:41.278 INFO Fetch successful Dec 16 14:13:41.279219 coreos-metadata[921]: Dec 16 14:13:41.279 INFO wrote hostname srv-6slrx.gb1.brightbox.com to /sysroot/etc/hostname Dec 16 14:13:41.281216 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 16 14:13:41.295969 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 16 14:13:41.296003 kernel: audit: type=1130 audit(1765894421.283:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:41.296040 kernel: audit: type=1131 audit(1765894421.283:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:41.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:41.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:41.281439 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 16 14:13:41.285813 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 14:13:41.317890 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 14:13:41.342060 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1053) Dec 16 14:13:41.345483 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 14:13:41.345519 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 14:13:41.353554 kernel: BTRFS info (device vda6): turning on async discard Dec 16 14:13:41.353602 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 14:13:41.356249 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
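With no config drive present, the hostname agent above falls back to the metadata service and writes the result to /sysroot/etc/hostname. The same lookup can be repeated by hand after boot; a sketch using the URL from the log:

    # Endpoint coreos-metadata queried for the hostname
    curl -s http://169.254.169.254/latest/meta-data/hostname

    # What ended up in /etc/hostname after switch-root
    cat /etc/hostname
    hostnamectl status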
Dec 16 14:13:41.398365 ignition[1071]: INFO : Ignition 2.22.0 Dec 16 14:13:41.398365 ignition[1071]: INFO : Stage: files Dec 16 14:13:41.400112 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 14:13:41.400112 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 14:13:41.400112 ignition[1071]: DEBUG : files: compiled without relabeling support, skipping Dec 16 14:13:41.402698 ignition[1071]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 14:13:41.402698 ignition[1071]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 14:13:41.405307 ignition[1071]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 14:13:41.406429 ignition[1071]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 14:13:41.407558 ignition[1071]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 14:13:41.407442 unknown[1071]: wrote ssh authorized keys file for user: core Dec 16 14:13:41.409406 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 14:13:41.409406 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 14:13:41.601433 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 14:13:41.911471 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 14:13:41.911471 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 14:13:41.915340 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 14:13:41.915340 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 14:13:41.915340 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 14:13:41.915340 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 14:13:41.915340 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 14:13:41.915340 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 14:13:41.915340 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 14:13:41.915340 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 14:13:41.915340 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 14:13:41.915340 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 14:13:41.926757 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 14:13:41.926757 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 14:13:41.926757 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 16 14:13:42.457369 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 14:13:44.810478 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 14:13:44.810478 ignition[1071]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 14:13:44.813368 ignition[1071]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 14:13:44.814683 ignition[1071]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 14:13:44.814683 ignition[1071]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 14:13:44.814683 ignition[1071]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 14:13:44.814683 ignition[1071]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 14:13:44.814683 ignition[1071]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 14:13:44.823340 ignition[1071]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 14:13:44.823340 ignition[1071]: INFO : files: files passed Dec 16 14:13:44.823340 ignition[1071]: INFO : Ignition finished successfully Dec 16 14:13:44.833483 kernel: audit: type=1130 audit(1765894424.825:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:44.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:44.822543 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 14:13:44.830391 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 14:13:44.835243 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 14:13:44.848836 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 14:13:44.849031 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 14:13:44.863442 kernel: audit: type=1130 audit(1765894424.851:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:44.863492 kernel: audit: type=1131 audit(1765894424.851:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:13:44.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:44.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:44.868173 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 14:13:44.868173 initrd-setup-root-after-ignition[1102]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 14:13:44.871973 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 14:13:44.875940 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 14:13:44.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:44.881355 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 14:13:44.885778 kernel: audit: type=1130 audit(1765894424.877:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:44.886004 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 14:13:44.948994 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 14:13:44.949234 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 14:13:44.961454 kernel: audit: type=1130 audit(1765894424.950:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:44.961506 kernel: audit: type=1131 audit(1765894424.950:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:44.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:44.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:44.951340 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 14:13:44.962108 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 14:13:44.963844 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 14:13:44.966243 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 14:13:45.000492 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 14:13:45.007177 kernel: audit: type=1130 audit(1765894425.001:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 14:13:45.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.004233 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 14:13:45.032932 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 14:13:45.034754 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 14:13:45.035762 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 14:13:45.037521 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 14:13:45.039123 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 14:13:45.046187 kernel: audit: type=1131 audit(1765894425.040:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.039378 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 14:13:45.046172 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 14:13:45.047085 systemd[1]: Stopped target basic.target - Basic System. Dec 16 14:13:45.048575 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 14:13:45.050004 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 14:13:45.051336 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 14:13:45.052962 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 14:13:45.054548 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 14:13:45.056285 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 14:13:45.057773 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 14:13:45.059413 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 14:13:45.060913 systemd[1]: Stopped target swap.target - Swaps. Dec 16 14:13:45.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.062471 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 14:13:45.062686 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 14:13:45.064194 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 14:13:45.065173 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 14:13:45.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.066568 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 14:13:45.066971 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 16 14:13:45.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.068193 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 14:13:45.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.068480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 14:13:45.070364 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 14:13:45.070601 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 14:13:45.072190 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 14:13:45.072404 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 14:13:45.076140 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 14:13:45.079242 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 14:13:45.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.081391 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 14:13:45.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.081579 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 14:13:45.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.084256 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 14:13:45.084498 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 14:13:45.086310 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 14:13:45.086540 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 14:13:45.101271 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 14:13:45.101434 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 14:13:45.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.116755 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 14:13:45.121600 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 14:13:45.122158 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 14:13:45.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:13:45.129947 ignition[1126]: INFO : Ignition 2.22.0 Dec 16 14:13:45.129947 ignition[1126]: INFO : Stage: umount Dec 16 14:13:45.131633 ignition[1126]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 14:13:45.131633 ignition[1126]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 14:13:45.131633 ignition[1126]: INFO : umount: umount passed Dec 16 14:13:45.131633 ignition[1126]: INFO : Ignition finished successfully Dec 16 14:13:45.134235 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 14:13:45.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.134466 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 14:13:45.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.135853 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 14:13:45.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.135939 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 14:13:45.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.136866 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 14:13:45.136933 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 14:13:45.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.138188 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 14:13:45.138261 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 14:13:45.139572 systemd[1]: Stopped target network.target - Network. Dec 16 14:13:45.140767 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 14:13:45.140837 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 14:13:45.142197 systemd[1]: Stopped target paths.target - Path Units. Dec 16 14:13:45.143403 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 14:13:45.145200 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 14:13:45.146324 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 14:13:45.147708 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 14:13:45.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.149178 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 14:13:45.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:13:45.149249 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 14:13:45.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.150392 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 14:13:45.150452 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 14:13:45.151829 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 14:13:45.151886 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 14:13:45.153489 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 14:13:45.153582 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 14:13:45.154811 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 14:13:45.169000 audit: BPF prog-id=9 op=UNLOAD Dec 16 14:13:45.154880 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 14:13:45.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.156180 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 14:13:45.156262 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 14:13:45.157709 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 14:13:45.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.159821 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 14:13:45.174000 audit: BPF prog-id=6 op=UNLOAD Dec 16 14:13:45.167124 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 14:13:45.167444 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 14:13:45.171714 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 14:13:45.171934 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 14:13:45.174966 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 14:13:45.180891 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 14:13:45.180985 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 14:13:45.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.184138 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 14:13:45.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.185364 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 14:13:45.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.185449 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Dec 16 14:13:45.188130 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 14:13:45.188209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 14:13:45.191497 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 14:13:45.191574 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 14:13:45.192829 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 14:13:45.204841 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 14:13:45.205145 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 14:13:45.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.207187 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 14:13:45.207306 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 14:13:45.210807 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 14:13:45.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.210880 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 14:13:45.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.211578 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 14:13:45.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.211662 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 14:13:45.213415 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 14:13:45.213490 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 14:13:45.215248 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 14:13:45.215329 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 14:13:45.222214 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 14:13:45.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.222924 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 14:13:45.223002 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 14:13:45.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.225763 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 14:13:45.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 14:13:45.225867 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 14:13:45.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.228063 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 14:13:45.228141 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 14:13:45.229821 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 14:13:45.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.229887 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 14:13:45.233093 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 14:13:45.233189 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 14:13:45.243374 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 14:13:45.243548 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 14:13:45.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.253335 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 14:13:45.253519 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 14:13:45.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:45.255207 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 14:13:45.257217 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 14:13:45.282013 systemd[1]: Switching root. Dec 16 14:13:45.330243 systemd-journald[330]: Journal stopped Dec 16 14:13:46.934868 systemd-journald[330]: Received SIGTERM from PID 1 (systemd). Dec 16 14:13:46.936937 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 14:13:46.936986 kernel: SELinux: policy capability open_perms=1 Dec 16 14:13:46.937053 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 14:13:46.937082 kernel: SELinux: policy capability always_check_network=0 Dec 16 14:13:46.937109 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 14:13:46.937168 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 14:13:46.937202 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 14:13:46.937250 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 14:13:46.937286 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 14:13:46.937330 systemd[1]: Successfully loaded SELinux policy in 73.059ms. Dec 16 14:13:46.937376 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.516ms. 
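
The SELinux policy-capability flags printed by the kernel above can be read back from userspace once the system is up; a minimal Python sketch, assuming selinuxfs is mounted at its conventional /sys/fs/selinux location:

    # Sketch: read back the SELinux policy capabilities reported by the kernel above.
    # Assumes selinuxfs is mounted at /sys/fs/selinux (the conventional mount point).
    from pathlib import Path

    CAPS_DIR = Path("/sys/fs/selinux/policy_capabilities")

    def selinux_policy_capabilities() -> dict[str, bool]:
        caps = {}
        if not CAPS_DIR.is_dir():
            return caps  # SELinux not enabled, or selinuxfs not mounted here
        for entry in sorted(CAPS_DIR.iterdir()):
            caps[entry.name] = entry.read_text().strip() == "1"
        return caps

    if __name__ == "__main__":
        for name, enabled in selinux_policy_capabilities().items():
            print(f"SELinux: policy capability {name}={int(enabled)}")
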
Dec 16 14:13:46.937400 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 14:13:46.937428 systemd[1]: Detected virtualization kvm. Dec 16 14:13:46.937466 systemd[1]: Detected architecture x86-64. Dec 16 14:13:46.937500 systemd[1]: Detected first boot. Dec 16 14:13:46.937537 systemd[1]: Hostname set to . Dec 16 14:13:46.937573 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 14:13:46.937608 zram_generator::config[1170]: No configuration found. Dec 16 14:13:46.937654 kernel: Guest personality initialized and is inactive Dec 16 14:13:46.937680 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 14:13:46.937713 kernel: Initialized host personality Dec 16 14:13:46.937734 kernel: NET: Registered PF_VSOCK protocol family Dec 16 14:13:46.937760 systemd[1]: Populated /etc with preset unit settings. Dec 16 14:13:46.937787 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 16 14:13:46.937818 kernel: audit: type=1334 audit(1765894426.462:91): prog-id=12 op=LOAD Dec 16 14:13:46.937838 kernel: audit: type=1334 audit(1765894426.462:92): prog-id=3 op=UNLOAD Dec 16 14:13:46.937866 kernel: audit: type=1334 audit(1765894426.462:93): prog-id=13 op=LOAD Dec 16 14:13:46.937890 kernel: audit: type=1334 audit(1765894426.462:94): prog-id=14 op=LOAD Dec 16 14:13:46.937910 kernel: audit: type=1334 audit(1765894426.462:95): prog-id=4 op=UNLOAD Dec 16 14:13:46.937936 kernel: audit: type=1334 audit(1765894426.462:96): prog-id=5 op=UNLOAD Dec 16 14:13:46.937972 kernel: audit: type=1131 audit(1765894426.466:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.938001 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 14:13:46.938052 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 14:13:46.938091 kernel: audit: type=1130 audit(1765894426.479:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.938115 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 14:13:46.938157 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 14:13:46.938199 kernel: audit: type=1131 audit(1765894426.479:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.938220 kernel: audit: type=1334 audit(1765894426.491:100): prog-id=12 op=UNLOAD Dec 16 14:13:46.938248 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 14:13:46.938286 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 14:13:46.938310 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 14:13:46.938338 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
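
"Initializing machine ID from SMBIOS/DMI UUID" means /etc/machine-id was seeded from the firmware product UUID on this first boot. A small sketch to compare the two values; the assumption here is that the machine ID is simply the DMI UUID with dashes stripped and lower-cased, and reading /sys/class/dmi/id/product_uuid normally requires root:

    # Sketch: compare the SMBIOS/DMI product UUID with /etc/machine-id.
    # Assumption: the machine ID is the DMI UUID with dashes removed, lower-cased.
    from pathlib import Path

    def read_machine_id() -> str:
        return Path("/etc/machine-id").read_text().strip()

    def read_dmi_uuid() -> str:
        raw = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
        return raw.replace("-", "").lower()

    if __name__ == "__main__":
        machine_id = read_machine_id()
        try:
            dmi = read_dmi_uuid()
        except PermissionError:
            dmi = "<needs root>"
        print(f"machine-id: {machine_id}")
        print(f"dmi uuid  : {dmi}")
        print("match" if machine_id == dmi else "differ (or unreadable)")
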
Dec 16 14:13:46.938367 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 14:13:46.938395 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 14:13:46.938432 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 14:13:46.938456 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 14:13:46.938483 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 14:13:46.938505 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 14:13:46.938543 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 14:13:46.938583 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 14:13:46.938608 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 14:13:46.938652 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 14:13:46.938674 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 14:13:46.938703 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 14:13:46.938731 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 14:13:46.938760 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 14:13:46.938782 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 14:13:46.938803 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 14:13:46.938829 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 14:13:46.938857 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 14:13:46.938879 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 14:13:46.938906 systemd[1]: Reached target slices.target - Slice Units. Dec 16 14:13:46.938928 systemd[1]: Reached target swap.target - Swaps. Dec 16 14:13:46.938949 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 14:13:46.938970 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 14:13:46.938991 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 14:13:46.943105 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 14:13:46.943156 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 14:13:46.943182 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 14:13:46.943203 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 14:13:46.943225 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 14:13:46.943267 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 14:13:46.943297 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 14:13:46.943333 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 14:13:46.943356 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 14:13:46.943384 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Dec 16 14:13:46.943415 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 14:13:46.943452 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 14:13:46.943475 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 14:13:46.943507 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 14:13:46.943553 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 14:13:46.943582 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 14:13:46.943610 systemd[1]: Reached target machines.target - Containers. Dec 16 14:13:46.943631 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 14:13:46.943651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 14:13:46.943677 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 14:13:46.943698 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 14:13:46.943730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 14:13:46.943758 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 14:13:46.943779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 14:13:46.943799 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 14:13:46.943820 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 14:13:46.943846 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 14:13:46.943888 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 14:13:46.943911 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 14:13:46.943953 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 14:13:46.943982 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 14:13:46.944045 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 14:13:46.944069 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 14:13:46.944099 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 14:13:46.944127 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 14:13:46.944155 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 14:13:46.944178 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 14:13:46.944199 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 14:13:46.944233 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 14:13:46.944266 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
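
Each modprobe@<module>.service unit above does nothing more than load one kernel module. Whether a module ended up loaded, or is built into the kernel, can be checked from /proc/modules and /sys/module; a rough sketch:

    # Sketch: check whether the modules handled by the modprobe@*.service units above
    # are present. Loadable modules appear in /proc/modules; built-in modules with
    # parameters show up under /sys/module only, so absence there is not conclusive.
    from pathlib import Path

    MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

    def loaded_modules() -> set[str]:
        with open("/proc/modules") as f:
            return {line.split()[0] for line in f}

    if __name__ == "__main__":
        loaded = loaded_modules()
        for name in MODULES:
            if name in loaded:
                state = "loaded as a module"
            elif Path("/sys/module", name).is_dir():
                state = "present under /sys/module (likely built-in)"
            else:
                state = "not found"
            print(f"{name}: {state}")
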
Dec 16 14:13:46.944288 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 14:13:46.944309 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 14:13:46.944350 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 14:13:46.944382 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 14:13:46.944406 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 14:13:46.944436 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 14:13:46.944470 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 14:13:46.944493 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 14:13:46.944532 kernel: fuse: init (API version 7.41) Dec 16 14:13:46.944556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 14:13:46.944576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 14:13:46.944597 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 14:13:46.944631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 14:13:46.944660 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 14:13:46.944694 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 14:13:46.944724 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 14:13:46.944757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 14:13:46.944788 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 14:13:46.944818 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 14:13:46.944847 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 14:13:46.944868 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 14:13:46.944889 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 14:13:46.944915 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 14:13:46.944947 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 14:13:46.944983 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 14:13:46.945003 kernel: ACPI: bus type drm_connector registered Dec 16 14:13:46.946537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 14:13:46.946610 systemd-journald[1265]: Collecting audit messages is enabled. Dec 16 14:13:46.950980 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 14:13:46.954108 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 14:13:46.954151 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 14:13:46.954174 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 14:13:46.954197 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 14:13:46.954228 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 16 14:13:46.954265 systemd-journald[1265]: Journal started Dec 16 14:13:46.954308 systemd-journald[1265]: Runtime Journal (/run/log/journal/e45ca4d8db314d859d2bff6a0ee3fda0) is 4.7M, max 37.8M, 33M free. Dec 16 14:13:46.577000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 14:13:46.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.744000 audit: BPF prog-id=14 op=UNLOAD Dec 16 14:13:46.744000 audit: BPF prog-id=13 op=UNLOAD Dec 16 14:13:46.745000 audit: BPF prog-id=15 op=LOAD Dec 16 14:13:46.745000 audit: BPF prog-id=16 op=LOAD Dec 16 14:13:46.745000 audit: BPF prog-id=17 op=LOAD Dec 16 14:13:46.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:13:46.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.928000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 14:13:46.928000 audit[1265]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd3ca35260 a2=4000 a3=0 items=0 ppid=1 pid=1265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:13:46.928000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 14:13:46.962765 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 14:13:46.442989 systemd[1]: Queued start job for default target multi-user.target. Dec 16 14:13:46.464665 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 14:13:46.466439 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 14:13:46.970738 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 14:13:46.977063 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 14:13:46.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.976392 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 14:13:46.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:13:46.978933 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 14:13:46.979313 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 14:13:46.980543 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 14:13:46.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:46.988994 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 14:13:47.012471 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 14:13:47.017337 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 14:13:47.023282 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 14:13:47.045040 kernel: loop1: detected capacity change from 0 to 111544 Dec 16 14:13:47.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.066076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 14:13:47.078742 systemd-journald[1265]: Time spent on flushing to /var/log/journal/e45ca4d8db314d859d2bff6a0ee3fda0 is 69.010ms for 1312 entries. Dec 16 14:13:47.078742 systemd-journald[1265]: System Journal (/var/log/journal/e45ca4d8db314d859d2bff6a0ee3fda0) is 8M, max 588.1M, 580.1M free. Dec 16 14:13:47.164199 systemd-journald[1265]: Received client request to flush runtime journal. Dec 16 14:13:47.164277 kernel: loop2: detected capacity change from 0 to 119256 Dec 16 14:13:47.164309 kernel: loop3: detected capacity change from 0 to 8 Dec 16 14:13:47.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.083095 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 14:13:47.090232 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Dec 16 14:13:47.090264 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Dec 16 14:13:47.105007 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 14:13:47.112356 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 14:13:47.159579 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 14:13:47.166833 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
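
journald's size reports above ("Runtime Journal ... is 4.7M", "System Journal ... is 8M") can be approximated by summing the journal files under the runtime and persistent directories; the "max" and "free" figures come from journald's internal accounting and are not reproduced by this sketch:

    # Sketch: approximate journald's "Runtime Journal ... is X" / "System Journal ... is Y"
    # figures by summing journal files on disk.
    from pathlib import Path

    def journal_usage(root: str) -> int:
        base = Path(root)
        if not base.is_dir():
            return 0
        return sum(p.stat().st_size for p in base.rglob("*.journal*") if p.is_file())

    def human(n: int) -> str:
        return f"{n / (1024 * 1024):.1f}M"

    if __name__ == "__main__":
        print("Runtime Journal (/run/log/journal):", human(journal_usage("/run/log/journal")))
        print("System Journal (/var/log/journal) :", human(journal_usage("/var/log/journal")))
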
Dec 16 14:13:47.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.178043 kernel: loop4: detected capacity change from 0 to 229808 Dec 16 14:13:47.194700 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 14:13:47.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.197000 audit: BPF prog-id=18 op=LOAD Dec 16 14:13:47.197000 audit: BPF prog-id=19 op=LOAD Dec 16 14:13:47.198000 audit: BPF prog-id=20 op=LOAD Dec 16 14:13:47.201292 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 14:13:47.203000 audit: BPF prog-id=21 op=LOAD Dec 16 14:13:47.205334 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 14:13:47.212320 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 14:13:47.218033 kernel: loop5: detected capacity change from 0 to 111544 Dec 16 14:13:47.231000 audit: BPF prog-id=22 op=LOAD Dec 16 14:13:47.232000 audit: BPF prog-id=23 op=LOAD Dec 16 14:13:47.232000 audit: BPF prog-id=24 op=LOAD Dec 16 14:13:47.233291 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 14:13:47.236000 audit: BPF prog-id=25 op=LOAD Dec 16 14:13:47.236000 audit: BPF prog-id=26 op=LOAD Dec 16 14:13:47.236000 audit: BPF prog-id=27 op=LOAD Dec 16 14:13:47.241322 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 14:13:47.247123 kernel: loop6: detected capacity change from 0 to 119256 Dec 16 14:13:47.261825 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Dec 16 14:13:47.261846 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Dec 16 14:13:47.274781 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 14:13:47.277415 kernel: loop7: detected capacity change from 0 to 8 Dec 16 14:13:47.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.292100 kernel: loop1: detected capacity change from 0 to 229808 Dec 16 14:13:47.311823 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 14:13:47.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.322433 (sd-merge)[1330]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Dec 16 14:13:47.333011 (sd-merge)[1330]: Merged extensions into '/usr'. Dec 16 14:13:47.342253 systemd[1]: Reload requested from client PID 1288 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 14:13:47.342291 systemd[1]: Reloading... Dec 16 14:13:47.362484 systemd-nsresourced[1331]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 14:13:47.490044 zram_generator::config[1376]: No configuration found. 
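
The sd-merge step above overlaid four extension images onto /usr. A sketch that lists candidate images in a similar way, assuming the commonly documented sysext search directories (/etc/extensions, /run/extensions, /var/lib/extensions):

    # Sketch: list candidate sysext images the way the sd-merge step above would see
    # them. The directory list is an assumption based on commonly documented sysext
    # search paths; the authoritative set is whatever systemd-sysext itself uses.
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def candidate_extensions() -> list[str]:
        found = []
        for d in SEARCH_DIRS:
            base = Path(d)
            if not base.is_dir():
                continue
            for p in sorted(base.iterdir()):
                # Both *.raw disk images and plain directory trees can act as extensions.
                if p.suffix == ".raw" or p.is_dir():
                    found.append(str(p))
        return found

    if __name__ == "__main__":
        for path in candidate_extensions():
            print(path)
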
Dec 16 14:13:47.598265 systemd-oomd[1327]: No swap; memory pressure usage will be degraded Dec 16 14:13:47.604285 systemd-resolved[1328]: Positive Trust Anchors: Dec 16 14:13:47.604303 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 14:13:47.604310 systemd-resolved[1328]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 14:13:47.604352 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 14:13:47.625610 systemd-resolved[1328]: Using system hostname 'srv-6slrx.gb1.brightbox.com'. Dec 16 14:13:47.866203 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 14:13:47.866375 systemd[1]: Reloading finished in 523 ms. Dec 16 14:13:47.896566 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 14:13:47.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.897926 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 14:13:47.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.898908 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 14:13:47.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.900120 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 14:13:47.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.906270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 14:13:47.909452 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 14:13:47.911702 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 14:13:47.920563 systemd[1]: Starting ensure-sysext.service... Dec 16 14:13:47.928284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
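
The two "Positive Trust Anchors" entries above are DNSSEC DS records for the root zone. Splitting them into their fields (owner, key tag, algorithm, digest type, digest) makes them easy to compare against the published root anchors:

    # Sketch: split the DNSSEC trust-anchor lines logged by systemd-resolved above
    # into their DS-record fields. The two strings are copied verbatim from the log.
    from dataclasses import dataclass

    @dataclass
    class DSRecord:
        owner: str
        key_tag: int
        algorithm: int
        digest_type: int
        digest: str

    def parse_ds(line: str) -> DSRecord:
        owner, _in, _ds, key_tag, alg, dtype, digest = line.split()
        return DSRecord(owner, int(key_tag), int(alg), int(dtype), digest.lower())

    if __name__ == "__main__":
        anchors = [
            ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
            ". IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16",
        ]
        for a in anchors:
            print(parse_ds(a))
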
Dec 16 14:13:47.933000 audit: BPF prog-id=28 op=LOAD Dec 16 14:13:47.936000 audit: BPF prog-id=22 op=UNLOAD Dec 16 14:13:47.936000 audit: BPF prog-id=29 op=LOAD Dec 16 14:13:47.943000 audit: BPF prog-id=30 op=LOAD Dec 16 14:13:47.943000 audit: BPF prog-id=23 op=UNLOAD Dec 16 14:13:47.943000 audit: BPF prog-id=24 op=UNLOAD Dec 16 14:13:47.944000 audit: BPF prog-id=31 op=LOAD Dec 16 14:13:47.944000 audit: BPF prog-id=18 op=UNLOAD Dec 16 14:13:47.944000 audit: BPF prog-id=32 op=LOAD Dec 16 14:13:47.944000 audit: BPF prog-id=33 op=LOAD Dec 16 14:13:47.945000 audit: BPF prog-id=19 op=UNLOAD Dec 16 14:13:47.945000 audit: BPF prog-id=20 op=UNLOAD Dec 16 14:13:47.947000 audit: BPF prog-id=34 op=LOAD Dec 16 14:13:47.947000 audit: BPF prog-id=15 op=UNLOAD Dec 16 14:13:47.947000 audit: BPF prog-id=35 op=LOAD Dec 16 14:13:47.947000 audit: BPF prog-id=36 op=LOAD Dec 16 14:13:47.947000 audit: BPF prog-id=16 op=UNLOAD Dec 16 14:13:47.947000 audit: BPF prog-id=17 op=UNLOAD Dec 16 14:13:47.948000 audit: BPF prog-id=37 op=LOAD Dec 16 14:13:47.949000 audit: BPF prog-id=21 op=UNLOAD Dec 16 14:13:47.954000 audit: BPF prog-id=38 op=LOAD Dec 16 14:13:47.954000 audit: BPF prog-id=25 op=UNLOAD Dec 16 14:13:47.954000 audit: BPF prog-id=39 op=LOAD Dec 16 14:13:47.954000 audit: BPF prog-id=40 op=LOAD Dec 16 14:13:47.955000 audit: BPF prog-id=26 op=UNLOAD Dec 16 14:13:47.955000 audit: BPF prog-id=27 op=UNLOAD Dec 16 14:13:47.957591 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 14:13:47.959839 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 14:13:47.962893 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 14:13:47.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:47.968000 audit: BPF prog-id=8 op=UNLOAD Dec 16 14:13:47.968000 audit: BPF prog-id=7 op=UNLOAD Dec 16 14:13:47.969000 audit: BPF prog-id=41 op=LOAD Dec 16 14:13:47.970000 audit: BPF prog-id=42 op=LOAD Dec 16 14:13:47.973412 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 14:13:47.973940 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 14:13:47.973995 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 14:13:47.974594 systemd[1]: Reload requested from client PID 1433 ('systemctl') (unit ensure-sysext.service)... Dec 16 14:13:47.974617 systemd[1]: Reloading... Dec 16 14:13:47.974878 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 14:13:47.977282 systemd-tmpfiles[1434]: ACLs are not supported, ignoring. Dec 16 14:13:47.977498 systemd-tmpfiles[1434]: ACLs are not supported, ignoring. Dec 16 14:13:47.986156 systemd-tmpfiles[1434]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 14:13:47.986298 systemd-tmpfiles[1434]: Skipping /boot Dec 16 14:13:48.007247 systemd-tmpfiles[1434]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 14:13:48.007446 systemd-tmpfiles[1434]: Skipping /boot Dec 16 14:13:48.070362 systemd-udevd[1439]: Using default interface naming scheme 'v257'. Dec 16 14:13:48.090101 zram_generator::config[1469]: No configuration found. 
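
The "Duplicate line for path ..." warnings above come from tmpfiles.d fragments that declare the same path more than once. A rough sketch that scans the standard drop-in directories for such duplicates (precedence and override rules are ignored for brevity):

    # Sketch: find duplicate tmpfiles.d path entries of the kind warned about above.
    # Scans only the usual drop-in directories; override semantics are not modelled.
    from collections import defaultdict
    from pathlib import Path

    TMPFILES_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

    def collect_paths() -> dict[str, list[str]]:
        seen = defaultdict(list)
        for d in TMPFILES_DIRS:
            base = Path(d)
            if not base.is_dir():
                continue
            for conf in sorted(base.glob("*.conf")):
                for lineno, line in enumerate(conf.read_text().splitlines(), 1):
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    fields = line.split()
                    if len(fields) >= 2:
                        seen[fields[1]].append(f"{conf}:{lineno}")
        return seen

    if __name__ == "__main__":
        for path, locations in collect_paths().items():
            if len(locations) > 1:
                print(f"Duplicate line for path \"{path}\": {', '.join(locations)}")
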
Dec 16 14:13:48.342078 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 14:13:48.374047 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 16 14:13:48.390069 kernel: ACPI: button: Power Button [PWRF] Dec 16 14:13:48.502049 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 14:13:48.512047 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 14:13:48.527059 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 14:13:48.527260 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 14:13:48.528310 systemd[1]: Reloading finished in 553 ms. Dec 16 14:13:48.545379 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 14:13:48.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.552000 audit: BPF prog-id=43 op=LOAD Dec 16 14:13:48.552000 audit: BPF prog-id=31 op=UNLOAD Dec 16 14:13:48.552000 audit: BPF prog-id=44 op=LOAD Dec 16 14:13:48.552000 audit: BPF prog-id=45 op=LOAD Dec 16 14:13:48.552000 audit: BPF prog-id=32 op=UNLOAD Dec 16 14:13:48.552000 audit: BPF prog-id=33 op=UNLOAD Dec 16 14:13:48.554000 audit: BPF prog-id=46 op=LOAD Dec 16 14:13:48.554000 audit: BPF prog-id=28 op=UNLOAD Dec 16 14:13:48.554000 audit: BPF prog-id=47 op=LOAD Dec 16 14:13:48.555000 audit: BPF prog-id=48 op=LOAD Dec 16 14:13:48.555000 audit: BPF prog-id=29 op=UNLOAD Dec 16 14:13:48.555000 audit: BPF prog-id=30 op=UNLOAD Dec 16 14:13:48.557000 audit: BPF prog-id=49 op=LOAD Dec 16 14:13:48.561000 audit: BPF prog-id=37 op=UNLOAD Dec 16 14:13:48.562000 audit: BPF prog-id=50 op=LOAD Dec 16 14:13:48.562000 audit: BPF prog-id=38 op=UNLOAD Dec 16 14:13:48.562000 audit: BPF prog-id=51 op=LOAD Dec 16 14:13:48.562000 audit: BPF prog-id=52 op=LOAD Dec 16 14:13:48.562000 audit: BPF prog-id=39 op=UNLOAD Dec 16 14:13:48.562000 audit: BPF prog-id=40 op=UNLOAD Dec 16 14:13:48.562000 audit: BPF prog-id=53 op=LOAD Dec 16 14:13:48.562000 audit: BPF prog-id=54 op=LOAD Dec 16 14:13:48.562000 audit: BPF prog-id=41 op=UNLOAD Dec 16 14:13:48.562000 audit: BPF prog-id=42 op=UNLOAD Dec 16 14:13:48.565000 audit: BPF prog-id=55 op=LOAD Dec 16 14:13:48.565000 audit: BPF prog-id=34 op=UNLOAD Dec 16 14:13:48.566000 audit: BPF prog-id=56 op=LOAD Dec 16 14:13:48.566000 audit: BPF prog-id=57 op=LOAD Dec 16 14:13:48.566000 audit: BPF prog-id=35 op=UNLOAD Dec 16 14:13:48.566000 audit: BPF prog-id=36 op=UNLOAD Dec 16 14:13:48.570080 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 14:13:48.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.645638 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 14:13:48.647792 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 14:13:48.652364 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 14:13:48.653887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 16 14:13:48.655408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 14:13:48.663307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 14:13:48.667632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 14:13:48.669576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 14:13:48.669840 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 14:13:48.679776 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 14:13:48.689766 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 14:13:48.690689 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 14:13:48.693665 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 14:13:48.697000 audit: BPF prog-id=58 op=LOAD Dec 16 14:13:48.701455 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 14:13:48.710445 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 14:13:48.713188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 14:13:48.721932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 14:13:48.723343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 14:13:48.744299 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 14:13:48.745243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 14:13:48.745479 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 14:13:48.745617 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 14:13:48.745809 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 14:13:48.757810 systemd[1]: Finished ensure-sysext.service. Dec 16 14:13:48.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.766000 audit: BPF prog-id=59 op=LOAD Dec 16 14:13:48.770391 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 14:13:48.772767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 14:13:48.773765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 16 14:13:48.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.775518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 14:13:48.776542 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 14:13:48.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.778735 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 14:13:48.780148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 14:13:48.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.783758 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 14:13:48.783852 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 14:13:48.828509 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 14:13:48.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.844000 audit[1573]: SYSTEM_BOOT pid=1573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.853640 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 14:13:48.854056 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 14:13:48.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.869875 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Dec 16 14:13:48.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.927902 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 14:13:48.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:13:48.934586 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 14:13:48.937890 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 14:13:48.944445 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 14:13:48.985000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 14:13:48.985000 audit[1607]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd76782830 a2=420 a3=0 items=0 ppid=1559 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:13:48.985000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 14:13:48.986070 augenrules[1607]: No rules Dec 16 14:13:48.986552 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 14:13:48.986889 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 14:13:49.249254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 14:13:49.294858 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 14:13:49.296001 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 14:13:49.315263 systemd-networkd[1572]: lo: Link UP Dec 16 14:13:49.315276 systemd-networkd[1572]: lo: Gained carrier Dec 16 14:13:49.317918 systemd-networkd[1572]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 14:13:49.317932 systemd-networkd[1572]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 14:13:49.318214 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 14:13:49.319872 systemd[1]: Reached target network.target - Network. Dec 16 14:13:49.323257 systemd-networkd[1572]: eth0: Link UP Dec 16 14:13:49.323660 systemd-networkd[1572]: eth0: Gained carrier Dec 16 14:13:49.323687 systemd-networkd[1572]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 14:13:49.325222 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
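
The PROCTITLE field in the audit record above is the hex-encoded argv of the process that loaded the rules, with NUL bytes separating the arguments; decoding it recovers the auditctl invocation:

    # Sketch: decode the hex-encoded PROCTITLE value from the audit record above.
    # The audit subsystem hex-encodes argv and joins arguments with NUL bytes.
    raw = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = bytes.fromhex(raw).split(b"\x00")
    print([a.decode() for a in argv])  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
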
Dec 16 14:13:49.329912 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 14:13:49.349580 systemd-networkd[1572]: eth0: DHCPv4 address 10.230.52.194/30, gateway 10.230.52.193 acquired from 10.230.52.193 Dec 16 14:13:49.353398 systemd-timesyncd[1578]: Network configuration changed, trying to establish connection. Dec 16 14:13:49.388442 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 14:13:49.524329 ldconfig[1565]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 14:13:49.528574 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 14:13:49.531871 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 14:13:49.555133 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 14:13:49.556604 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 14:13:49.557507 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 14:13:49.558347 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 14:13:49.559286 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 14:13:49.560279 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 14:13:49.561204 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 14:13:49.561974 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 14:13:49.562894 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 14:13:49.563666 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 14:13:49.564492 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 14:13:49.564550 systemd[1]: Reached target paths.target - Path Units. Dec 16 14:13:49.565199 systemd[1]: Reached target timers.target - Timer Units. Dec 16 14:13:49.567153 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 14:13:49.569524 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 14:13:49.573557 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 14:13:49.580326 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 14:13:49.581102 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 14:13:49.600848 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 14:13:49.602010 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 14:13:49.603715 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 14:13:49.605532 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 14:13:49.606224 systemd[1]: Reached target basic.target - Basic System. Dec 16 14:13:49.606872 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 14:13:49.606923 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
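
The DHCPv4 lease above is a /30, which leaves exactly two usable host addresses: the gateway and this machine. The arithmetic can be checked with the standard ipaddress module:

    # Sketch: verify the DHCPv4 lease arithmetic from the systemd-networkd line above.
    import ipaddress

    iface = ipaddress.ip_interface("10.230.52.194/30")
    gateway = ipaddress.ip_address("10.230.52.193")

    net = iface.network
    hosts = list(net.hosts())

    print(f"network   : {net}")                      # 10.230.52.192/30
    print(f"usable    : {[str(h) for h in hosts]}")  # ['10.230.52.193', '10.230.52.194']
    print(f"gateway ok: {gateway in hosts}")         # True
    print(f"broadcast : {net.broadcast_address}")    # 10.230.52.195
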
Dec 16 14:13:49.608839 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 14:13:49.614247 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 14:13:49.619475 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 14:13:49.626261 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 14:13:49.628574 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 14:13:49.633047 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 14:13:49.633432 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 14:13:49.634188 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 14:13:49.635606 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 14:13:49.643288 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 14:13:49.648299 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 14:13:49.653874 jq[1633]: false Dec 16 14:13:49.655894 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 14:13:49.665714 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 14:13:49.670487 extend-filesystems[1634]: Found /dev/vda6 Dec 16 14:13:49.677952 extend-filesystems[1634]: Found /dev/vda9 Dec 16 14:13:49.680920 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 14:13:49.681787 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 14:13:49.685386 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 14:13:49.688932 extend-filesystems[1634]: Checking size of /dev/vda9 Dec 16 14:13:49.690236 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 14:13:49.695085 oslogin_cache_refresh[1636]: Refreshing passwd entry cache Dec 16 14:13:49.695759 google_oslogin_nss_cache[1636]: oslogin_cache_refresh[1636]: Refreshing passwd entry cache Dec 16 14:13:49.696354 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 14:13:49.705956 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 14:13:49.709263 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 14:13:49.709633 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 14:13:49.711186 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 14:13:49.711503 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 16 14:13:49.721151 extend-filesystems[1634]: Resized partition /dev/vda9 Dec 16 14:13:49.733120 extend-filesystems[1666]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 14:13:49.751052 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Dec 16 14:13:49.754559 update_engine[1648]: I20251216 14:13:49.754462 1648 main.cc:92] Flatcar Update Engine starting Dec 16 14:13:49.765977 jq[1650]: true Dec 16 14:13:49.766241 google_oslogin_nss_cache[1636]: oslogin_cache_refresh[1636]: Failure getting users, quitting Dec 16 14:13:49.766241 google_oslogin_nss_cache[1636]: oslogin_cache_refresh[1636]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 14:13:49.766241 google_oslogin_nss_cache[1636]: oslogin_cache_refresh[1636]: Refreshing group entry cache Dec 16 14:13:49.761352 oslogin_cache_refresh[1636]: Failure getting users, quitting Dec 16 14:13:49.761385 oslogin_cache_refresh[1636]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 14:13:49.761447 oslogin_cache_refresh[1636]: Refreshing group entry cache Dec 16 14:13:49.769065 google_oslogin_nss_cache[1636]: oslogin_cache_refresh[1636]: Failure getting groups, quitting Dec 16 14:13:49.769065 google_oslogin_nss_cache[1636]: oslogin_cache_refresh[1636]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 14:13:49.767073 oslogin_cache_refresh[1636]: Failure getting groups, quitting Dec 16 14:13:49.767091 oslogin_cache_refresh[1636]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 14:13:49.771538 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 14:13:49.771964 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 14:13:49.794837 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 14:13:49.795295 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 14:13:49.799113 tar[1654]: linux-amd64/LICENSE Dec 16 14:13:49.809521 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 14:13:49.809162 dbus-daemon[1631]: [system] SELinux support is enabled Dec 16 14:13:49.816945 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 14:13:49.816990 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 14:13:49.818019 dbus-daemon[1631]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1572 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 14:13:49.818689 tar[1654]: linux-amd64/helm Dec 16 14:13:49.818661 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 14:13:49.818684 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 14:13:49.825408 jq[1679]: true Dec 16 14:13:49.830403 update_engine[1648]: I20251216 14:13:49.830341 1648 update_check_scheduler.cc:74] Next update check in 7m27s Dec 16 14:13:49.831127 systemd[1]: Started update-engine.service - Update Engine. 
Dec 16 14:13:49.832203 dbus-daemon[1631]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 14:13:49.869129 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 14:13:49.883807 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 14:13:49.958241 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Dec 16 14:13:49.982848 extend-filesystems[1666]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 14:13:49.982848 extend-filesystems[1666]: old_desc_blocks = 1, new_desc_blocks = 7 Dec 16 14:13:49.982848 extend-filesystems[1666]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Dec 16 14:13:49.992230 extend-filesystems[1634]: Resized filesystem in /dev/vda9 Dec 16 14:13:49.984137 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 14:13:49.984589 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 14:13:50.036758 systemd-logind[1646]: Watching system buttons on /dev/input/event3 (Power Button) Dec 16 14:13:50.036801 systemd-logind[1646]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 14:13:50.045043 systemd-logind[1646]: New seat seat0. Dec 16 14:13:50.054035 bash[1707]: Updated "/home/core/.ssh/authorized_keys" Dec 16 14:13:50.051951 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 14:13:50.057235 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 14:13:50.063470 systemd[1]: Starting sshkeys.service... Dec 16 14:13:50.115971 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 14:13:50.123500 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 14:13:50.158860 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 14:13:50.195607 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 14:13:50.198519 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 14:13:50.224711 dbus-daemon[1631]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 14:13:50.229066 dbus-daemon[1631]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1688 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 14:13:50.236660 systemd[1]: Starting polkit.service - Authorization Manager... 
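[Annotation] For scale, the resize2fs output above corresponds to an online grow of the root filesystem from roughly 6.2 GiB to roughly 53.9 GiB. A short sketch converting the reported 4 KiB block counts:

```python
# Sketch: convert the block counts reported by resize2fs above into sizes.
# The counts and the 4 KiB block size are taken straight from the log.
BLOCK_SIZE = 4096
old_blocks, new_blocks = 1_617_920, 14_138_363

def to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {to_gib(old_blocks):.1f} GiB, after: {to_gib(new_blocks):.1f} GiB")
# before: 6.2 GiB, after: 53.9 GiB
```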
Dec 16 14:13:50.368169 locksmithd[1687]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 14:13:50.406087 containerd[1683]: time="2025-12-16T14:13:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 14:13:50.409283 containerd[1683]: time="2025-12-16T14:13:50.409250625Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 14:13:50.414807 sshd_keygen[1673]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 14:13:50.448998 containerd[1683]: time="2025-12-16T14:13:50.447534567Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.624µs" Dec 16 14:13:50.448055 polkitd[1721]: Started polkitd version 126 Dec 16 14:13:50.449614 containerd[1683]: time="2025-12-16T14:13:50.449570181Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 14:13:50.449823 containerd[1683]: time="2025-12-16T14:13:50.449741347Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 14:13:50.450902 containerd[1683]: time="2025-12-16T14:13:50.450757483Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 14:13:50.452094 containerd[1683]: time="2025-12-16T14:13:50.451645443Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 14:13:50.452094 containerd[1683]: time="2025-12-16T14:13:50.451688412Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 14:13:50.452094 containerd[1683]: time="2025-12-16T14:13:50.451798693Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 14:13:50.452094 containerd[1683]: time="2025-12-16T14:13:50.451837000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 14:13:50.452450 containerd[1683]: time="2025-12-16T14:13:50.452407639Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 14:13:50.452595 containerd[1683]: time="2025-12-16T14:13:50.452570865Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 14:13:50.452701 containerd[1683]: time="2025-12-16T14:13:50.452677750Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 14:13:50.452792 containerd[1683]: time="2025-12-16T14:13:50.452771854Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 14:13:50.453486 containerd[1683]: time="2025-12-16T14:13:50.453459334Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 14:13:50.454077 containerd[1683]: time="2025-12-16T14:13:50.453585554Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 14:13:50.454077 containerd[1683]: time="2025-12-16T14:13:50.453755991Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 14:13:50.455573 polkitd[1721]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 14:13:50.456596 containerd[1683]: time="2025-12-16T14:13:50.455626846Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 14:13:50.456596 containerd[1683]: time="2025-12-16T14:13:50.455707671Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 14:13:50.456596 containerd[1683]: time="2025-12-16T14:13:50.455728332Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 14:13:50.456596 containerd[1683]: time="2025-12-16T14:13:50.455793909Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 14:13:50.456596 containerd[1683]: time="2025-12-16T14:13:50.456205361Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 14:13:50.456596 containerd[1683]: time="2025-12-16T14:13:50.456299952Z" level=info msg="metadata content store policy set" policy=shared Dec 16 14:13:50.457324 polkitd[1721]: Loading rules from directory /run/polkit-1/rules.d Dec 16 14:13:50.457481 polkitd[1721]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 14:13:50.458234 polkitd[1721]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 14:13:50.458351 polkitd[1721]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 14:13:50.458524 polkitd[1721]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459515721Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459587960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459686632Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459714438Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459735514Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459758518Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459777398Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 14:13:50.461042 containerd[1683]: 
time="2025-12-16T14:13:50.459792798Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459812379Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459841702Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459858884Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459877883Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459923512Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 14:13:50.461042 containerd[1683]: time="2025-12-16T14:13:50.459948308Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 14:13:50.461488 containerd[1683]: time="2025-12-16T14:13:50.460116078Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 14:13:50.461488 containerd[1683]: time="2025-12-16T14:13:50.460166636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 14:13:50.461488 containerd[1683]: time="2025-12-16T14:13:50.460191827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 14:13:50.461488 containerd[1683]: time="2025-12-16T14:13:50.460209110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 14:13:50.461488 containerd[1683]: time="2025-12-16T14:13:50.460225497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 14:13:50.461488 containerd[1683]: time="2025-12-16T14:13:50.460241785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 14:13:50.461488 containerd[1683]: time="2025-12-16T14:13:50.460259247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 14:13:50.461488 containerd[1683]: time="2025-12-16T14:13:50.460287706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 14:13:50.461488 containerd[1683]: time="2025-12-16T14:13:50.460306980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 14:13:50.461488 containerd[1683]: time="2025-12-16T14:13:50.460323705Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 14:13:50.461488 containerd[1683]: time="2025-12-16T14:13:50.460340061Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 14:13:50.461488 containerd[1683]: time="2025-12-16T14:13:50.460383865Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 14:13:50.461989 polkitd[1721]: Finished loading, compiling and executing 2 rules Dec 16 14:13:50.462257 containerd[1683]: time="2025-12-16T14:13:50.460493700Z" level=info msg="Get image filesystem path 
\"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 14:13:50.462386 containerd[1683]: time="2025-12-16T14:13:50.462361177Z" level=info msg="Start snapshots syncer" Dec 16 14:13:50.462536 containerd[1683]: time="2025-12-16T14:13:50.462504074Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 14:13:50.463290 containerd[1683]: time="2025-12-16T14:13:50.463241248Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 14:13:50.463649 containerd[1683]: time="2025-12-16T14:13:50.463606579Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 14:13:50.463833 containerd[1683]: time="2025-12-16T14:13:50.463795706Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 14:13:50.464843 systemd[1]: Started polkit.service - Authorization Manager. 
Dec 16 14:13:50.467359 dbus-daemon[1631]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 14:13:50.469452 polkitd[1721]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 14:13:50.472005 containerd[1683]: time="2025-12-16T14:13:50.471965989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 14:13:50.472088 containerd[1683]: time="2025-12-16T14:13:50.472043488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 14:13:50.472088 containerd[1683]: time="2025-12-16T14:13:50.472074250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 14:13:50.472198 containerd[1683]: time="2025-12-16T14:13:50.472099270Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 14:13:50.472198 containerd[1683]: time="2025-12-16T14:13:50.472139213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 14:13:50.472198 containerd[1683]: time="2025-12-16T14:13:50.472189766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 14:13:50.472312 containerd[1683]: time="2025-12-16T14:13:50.472214953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 14:13:50.472312 containerd[1683]: time="2025-12-16T14:13:50.472238953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 14:13:50.472312 containerd[1683]: time="2025-12-16T14:13:50.472259862Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 14:13:50.472401 containerd[1683]: time="2025-12-16T14:13:50.472322673Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 14:13:50.472401 containerd[1683]: time="2025-12-16T14:13:50.472353305Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 14:13:50.472401 containerd[1683]: time="2025-12-16T14:13:50.472369168Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 14:13:50.472401 containerd[1683]: time="2025-12-16T14:13:50.472389105Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 14:13:50.472532 containerd[1683]: time="2025-12-16T14:13:50.472408103Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 14:13:50.472532 containerd[1683]: time="2025-12-16T14:13:50.472429420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 14:13:50.472532 containerd[1683]: time="2025-12-16T14:13:50.472448365Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 14:13:50.472532 containerd[1683]: time="2025-12-16T14:13:50.472483181Z" level=info msg="runtime interface created" Dec 16 14:13:50.472532 containerd[1683]: time="2025-12-16T14:13:50.472494854Z" level=info msg="created NRI interface" Dec 16 14:13:50.472532 containerd[1683]: time="2025-12-16T14:13:50.472513625Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 14:13:50.472730 containerd[1683]: time="2025-12-16T14:13:50.472541294Z" level=info msg="Connect containerd service" Dec 16 14:13:50.472730 containerd[1683]: time="2025-12-16T14:13:50.472609088Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 14:13:50.476608 containerd[1683]: time="2025-12-16T14:13:50.476560009Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 14:13:50.486954 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 14:13:50.495324 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 14:13:50.500210 systemd[1]: Started sshd@0-10.230.52.194:22-139.178.89.65:33594.service - OpenSSH per-connection server daemon (139.178.89.65:33594). Dec 16 14:13:50.514671 systemd-hostnamed[1688]: Hostname set to (static) Dec 16 14:13:50.546559 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 14:13:50.548142 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 14:13:50.556560 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 14:13:50.613607 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 14:13:50.620614 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 14:13:50.624367 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 14:13:50.626457 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 14:13:50.662822 containerd[1683]: time="2025-12-16T14:13:50.662717055Z" level=info msg="Start subscribing containerd event" Dec 16 14:13:50.663987 containerd[1683]: time="2025-12-16T14:13:50.663114013Z" level=info msg="Start recovering state" Dec 16 14:13:50.664204 containerd[1683]: time="2025-12-16T14:13:50.663771821Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 14:13:50.664294 containerd[1683]: time="2025-12-16T14:13:50.664267647Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 14:13:50.665455 containerd[1683]: time="2025-12-16T14:13:50.665423849Z" level=info msg="Start event monitor" Dec 16 14:13:50.665572 containerd[1683]: time="2025-12-16T14:13:50.665547765Z" level=info msg="Start cni network conf syncer for default" Dec 16 14:13:50.665629 containerd[1683]: time="2025-12-16T14:13:50.665575549Z" level=info msg="Start streaming server" Dec 16 14:13:50.665629 containerd[1683]: time="2025-12-16T14:13:50.665605854Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 14:13:50.665629 containerd[1683]: time="2025-12-16T14:13:50.665618367Z" level=info msg="runtime interface starting up..." Dec 16 14:13:50.665722 containerd[1683]: time="2025-12-16T14:13:50.665630468Z" level=info msg="starting plugins..." Dec 16 14:13:50.665722 containerd[1683]: time="2025-12-16T14:13:50.665673292Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 14:13:50.665911 containerd[1683]: time="2025-12-16T14:13:50.665885926Z" level=info msg="containerd successfully booted in 0.262349s" Dec 16 14:13:50.666276 systemd[1]: Started containerd.service - containerd container runtime. 
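[Annotation] The CRI plugin error above ("no network config found in /etc/cni/net.d") is expected at this point: the cri config logged earlier points at /etc/cni/net.d and /opt/cni/bin, and nothing has populated them yet; a CNI add-on normally does so after the node joins a cluster. Purely as a hypothetical illustration of what the plugin is waiting for, a minimal bridge/host-local conflist could look like this (network name and subnet are invented, and the bridge and host-local binaries are assumed to exist under /opt/cni/bin):

```python
# Sketch (hypothetical): the CRI plugin finds no network config because
# /etc/cni/net.d is still empty; a CNI add-on normally populates it later. This
# minimal bridge/host-local conflist is made-up example data (invented name and
# subnet; plugin binaries assumed under /opt/cni/bin, the binDir in the cri config).
import json
import pathlib

conflist = {
    "cniVersion": "1.0.0",
    "name": "example-bridge",                      # hypothetical network name
    "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": True,
        "ipMasq": True,
        "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": "10.88.0.0/16"}]],   # example pod subnet
            "routes": [{"dst": "0.0.0.0/0"}],
        },
    }],
}

net_d = pathlib.Path("/etc/cni/net.d")
net_d.mkdir(parents=True, exist_ok=True)
(net_d / "10-example.conflist").write_text(json.dumps(conflist, indent=2) + "\n")
```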
Dec 16 14:13:50.677029 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 14:13:50.816181 tar[1654]: linux-amd64/README.md Dec 16 14:13:50.833928 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 14:13:50.840206 systemd-networkd[1572]: eth0: Gained IPv6LL Dec 16 14:13:50.842158 systemd-timesyncd[1578]: Network configuration changed, trying to establish connection. Dec 16 14:13:50.844263 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 14:13:50.845929 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 14:13:50.849520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 14:13:50.853411 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 14:13:50.889020 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 14:13:51.259048 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 14:13:51.374607 sshd[1744]: Accepted publickey for core from 139.178.89.65 port 33594 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:13:51.377472 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:13:51.392812 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 14:13:51.395436 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 14:13:51.405676 systemd-logind[1646]: New session 1 of user core. Dec 16 14:13:51.428578 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 14:13:51.436117 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 14:13:51.449822 (systemd)[1783]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 14:13:51.455961 systemd-logind[1646]: New session c1 of user core. Dec 16 14:13:51.635510 systemd[1783]: Queued start job for default target default.target. Dec 16 14:13:51.641897 systemd[1783]: Created slice app.slice - User Application Slice. Dec 16 14:13:51.641955 systemd[1783]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 14:13:51.641980 systemd[1783]: Reached target paths.target - Paths. Dec 16 14:13:51.642699 systemd[1783]: Reached target timers.target - Timers. Dec 16 14:13:51.646184 systemd[1783]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 14:13:51.647679 systemd[1783]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 14:13:51.667160 systemd[1783]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 14:13:51.674931 systemd[1783]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 14:13:51.676367 systemd[1783]: Reached target sockets.target - Sockets. Dec 16 14:13:51.676440 systemd[1783]: Reached target basic.target - Basic System. Dec 16 14:13:51.676509 systemd[1783]: Reached target default.target - Main User Target. Dec 16 14:13:51.676580 systemd[1783]: Startup finished in 209ms. Dec 16 14:13:51.676783 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 14:13:51.683302 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 14:13:51.752787 systemd-timesyncd[1578]: Network configuration changed, trying to establish connection. 
Dec 16 14:13:51.753905 systemd-networkd[1572]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8d30:24:19ff:fee6:34c2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8d30:24:19ff:fee6:34c2/64 assigned by NDisc. Dec 16 14:13:51.754098 systemd-networkd[1572]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 16 14:13:51.866519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 14:13:51.878681 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 14:13:52.137586 systemd[1]: Started sshd@1-10.230.52.194:22-139.178.89.65:34316.service - OpenSSH per-connection server daemon (139.178.89.65:34316). Dec 16 14:13:52.476156 kubelet[1801]: E1216 14:13:52.475981 1801 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 14:13:52.478789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 14:13:52.479005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 14:13:52.479750 systemd[1]: kubelet.service: Consumed 1.039s CPU time, 266.9M memory peak. Dec 16 14:13:52.695049 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 14:13:52.917636 sshd[1807]: Accepted publickey for core from 139.178.89.65 port 34316 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:13:52.920038 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:13:52.927829 systemd-logind[1646]: New session 2 of user core. Dec 16 14:13:52.943557 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 14:13:53.269078 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 14:13:53.360329 sshd[1813]: Connection closed by 139.178.89.65 port 34316 Dec 16 14:13:53.361364 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Dec 16 14:13:53.366646 systemd[1]: sshd@1-10.230.52.194:22-139.178.89.65:34316.service: Deactivated successfully. Dec 16 14:13:53.368960 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 14:13:53.370465 systemd-logind[1646]: Session 2 logged out. Waiting for processes to exit. Dec 16 14:13:53.372191 systemd-logind[1646]: Removed session 2. Dec 16 14:13:53.520045 systemd[1]: Started sshd@2-10.230.52.194:22-139.178.89.65:34330.service - OpenSSH per-connection server daemon (139.178.89.65:34330). Dec 16 14:13:53.529143 systemd-timesyncd[1578]: Network configuration changed, trying to establish connection. Dec 16 14:13:54.313191 sshd[1820]: Accepted publickey for core from 139.178.89.65 port 34330 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:13:54.314867 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:13:54.322802 systemd-logind[1646]: New session 3 of user core. Dec 16 14:13:54.332405 systemd[1]: Started session-3.scope - Session 3 of User core. 
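[Annotation] The kubelet failure above repeats throughout this log: the unit is restarted on a timer but exits immediately because /var/lib/kubelet/config.yaml has not been generated yet; on a kubeadm-provisioned node that file is written by `kubeadm init` or `kubeadm join`. As a hypothetical sketch of the kind of file the unit is waiting for, not the one kubeadm actually produces:

```python
# Sketch (hypothetical): the kubelet exits because /var/lib/kubelet/config.yaml is
# missing; kubeadm normally writes it during init/join. The document below only
# illustrates a minimal KubeletConfiguration -- it is not what kubeadm generates.
import pathlib
import textwrap

kubelet_config = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # matches SystemdCgroup=true and the containerd socket in the cri config logged above
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
""")

kubelet_dir = pathlib.Path("/var/lib/kubelet")
kubelet_dir.mkdir(parents=True, exist_ok=True)
(kubelet_dir / "config.yaml").write_text(kubelet_config)
```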
Dec 16 14:13:54.759898 sshd[1823]: Connection closed by 139.178.89.65 port 34330 Dec 16 14:13:54.760773 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Dec 16 14:13:54.766156 systemd[1]: sshd@2-10.230.52.194:22-139.178.89.65:34330.service: Deactivated successfully. Dec 16 14:13:54.768800 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 14:13:54.770298 systemd-logind[1646]: Session 3 logged out. Waiting for processes to exit. Dec 16 14:13:54.771982 systemd-logind[1646]: Removed session 3. Dec 16 14:13:55.721160 login[1758]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 14:13:55.729848 systemd-logind[1646]: New session 4 of user core. Dec 16 14:13:55.737360 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 14:13:56.040706 login[1759]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 14:13:56.047604 systemd-logind[1646]: New session 5 of user core. Dec 16 14:13:56.062457 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 14:13:56.709066 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 14:13:56.722848 coreos-metadata[1630]: Dec 16 14:13:56.722 WARN failed to locate config-drive, using the metadata service API instead Dec 16 14:13:56.748030 coreos-metadata[1630]: Dec 16 14:13:56.747 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 16 14:13:56.755640 coreos-metadata[1630]: Dec 16 14:13:56.755 INFO Fetch failed with 404: resource not found Dec 16 14:13:56.755640 coreos-metadata[1630]: Dec 16 14:13:56.755 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 14:13:56.756238 coreos-metadata[1630]: Dec 16 14:13:56.756 INFO Fetch successful Dec 16 14:13:56.756408 coreos-metadata[1630]: Dec 16 14:13:56.756 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 16 14:13:56.784085 coreos-metadata[1630]: Dec 16 14:13:56.783 INFO Fetch successful Dec 16 14:13:56.784085 coreos-metadata[1630]: Dec 16 14:13:56.784 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 16 14:13:56.800183 coreos-metadata[1630]: Dec 16 14:13:56.799 INFO Fetch successful Dec 16 14:13:56.800183 coreos-metadata[1630]: Dec 16 14:13:56.800 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 16 14:13:56.815447 coreos-metadata[1630]: Dec 16 14:13:56.815 INFO Fetch successful Dec 16 14:13:56.815447 coreos-metadata[1630]: Dec 16 14:13:56.815 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 16 14:13:56.837165 coreos-metadata[1630]: Dec 16 14:13:56.837 INFO Fetch successful Dec 16 14:13:56.876179 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 14:13:56.878124 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
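[Annotation] coreos-metadata above gives up on the config drive, gets a 404 for the OpenStack-flavoured meta_data.json, and then succeeds against the EC2-compatible endpoints. A read-only sketch that replays the same lookups (URLs copied from the log; the link-local address is only reachable from inside the instance):

```python
# Sketch: replay (read-only) the metadata lookups coreos-metadata performs above
# after the config-drive probe fails. URLs are copied from the log.
from urllib.request import urlopen

BASE = "http://169.254.169.254/latest/meta-data"
for key in ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"):
    with urlopen(f"{BASE}/{key}", timeout=5) as resp:
        print(key, "=", resp.read().decode().strip())
```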
Dec 16 14:13:57.284051 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 14:13:57.299440 coreos-metadata[1715]: Dec 16 14:13:57.299 WARN failed to locate config-drive, using the metadata service API instead Dec 16 14:13:57.320478 coreos-metadata[1715]: Dec 16 14:13:57.320 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 16 14:13:57.344245 coreos-metadata[1715]: Dec 16 14:13:57.344 INFO Fetch successful Dec 16 14:13:57.344599 coreos-metadata[1715]: Dec 16 14:13:57.344 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 14:13:57.377735 coreos-metadata[1715]: Dec 16 14:13:57.377 INFO Fetch successful Dec 16 14:13:57.380765 unknown[1715]: wrote ssh authorized keys file for user: core Dec 16 14:13:57.417744 update-ssh-keys[1862]: Updated "/home/core/.ssh/authorized_keys" Dec 16 14:13:57.419949 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 14:13:57.422108 systemd[1]: Finished sshkeys.service. Dec 16 14:13:57.425271 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 14:13:57.431136 systemd[1]: Startup finished in 3.351s (kernel) + 14.770s (initrd) + 11.920s (userspace) = 30.043s. Dec 16 14:14:02.498315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 14:14:02.501171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 14:14:02.690025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 14:14:02.698693 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 14:14:02.789957 kubelet[1874]: E1216 14:14:02.789791 1874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 14:14:02.794580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 14:14:02.794813 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 14:14:02.795691 systemd[1]: kubelet.service: Consumed 209ms CPU time, 111.3M memory peak. Dec 16 14:14:04.921593 systemd[1]: Started sshd@3-10.230.52.194:22-139.178.89.65:33918.service - OpenSSH per-connection server daemon (139.178.89.65:33918). Dec 16 14:14:05.710387 sshd[1881]: Accepted publickey for core from 139.178.89.65 port 33918 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:14:05.712085 sshd-session[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:14:05.720371 systemd-logind[1646]: New session 6 of user core. Dec 16 14:14:05.728335 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 14:14:06.152962 sshd[1884]: Connection closed by 139.178.89.65 port 33918 Dec 16 14:14:06.153817 sshd-session[1881]: pam_unix(sshd:session): session closed for user core Dec 16 14:14:06.158225 systemd[1]: sshd@3-10.230.52.194:22-139.178.89.65:33918.service: Deactivated successfully. Dec 16 14:14:06.160914 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 14:14:06.162678 systemd-logind[1646]: Session 6 logged out. Waiting for processes to exit. Dec 16 14:14:06.164427 systemd-logind[1646]: Removed session 6. 
Dec 16 14:14:06.314515 systemd[1]: Started sshd@4-10.230.52.194:22-139.178.89.65:33920.service - OpenSSH per-connection server daemon (139.178.89.65:33920). Dec 16 14:14:07.100974 sshd[1890]: Accepted publickey for core from 139.178.89.65 port 33920 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:14:07.102715 sshd-session[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:14:07.111135 systemd-logind[1646]: New session 7 of user core. Dec 16 14:14:07.122328 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 14:14:07.538288 sshd[1893]: Connection closed by 139.178.89.65 port 33920 Dec 16 14:14:07.539201 sshd-session[1890]: pam_unix(sshd:session): session closed for user core Dec 16 14:14:07.544484 systemd[1]: sshd@4-10.230.52.194:22-139.178.89.65:33920.service: Deactivated successfully. Dec 16 14:14:07.547402 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 14:14:07.549619 systemd-logind[1646]: Session 7 logged out. Waiting for processes to exit. Dec 16 14:14:07.551195 systemd-logind[1646]: Removed session 7. Dec 16 14:14:07.699890 systemd[1]: Started sshd@5-10.230.52.194:22-139.178.89.65:33928.service - OpenSSH per-connection server daemon (139.178.89.65:33928). Dec 16 14:14:08.480193 sshd[1899]: Accepted publickey for core from 139.178.89.65 port 33928 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:14:08.481778 sshd-session[1899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:14:08.488693 systemd-logind[1646]: New session 8 of user core. Dec 16 14:14:08.497299 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 14:14:08.925566 sshd[1902]: Connection closed by 139.178.89.65 port 33928 Dec 16 14:14:08.926485 sshd-session[1899]: pam_unix(sshd:session): session closed for user core Dec 16 14:14:08.932807 systemd[1]: sshd@5-10.230.52.194:22-139.178.89.65:33928.service: Deactivated successfully. Dec 16 14:14:08.935313 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 14:14:08.937240 systemd-logind[1646]: Session 8 logged out. Waiting for processes to exit. Dec 16 14:14:08.938917 systemd-logind[1646]: Removed session 8. Dec 16 14:14:09.082179 systemd[1]: Started sshd@6-10.230.52.194:22-139.178.89.65:33930.service - OpenSSH per-connection server daemon (139.178.89.65:33930). Dec 16 14:14:09.862001 sshd[1908]: Accepted publickey for core from 139.178.89.65 port 33930 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:14:09.864023 sshd-session[1908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:14:09.871414 systemd-logind[1646]: New session 9 of user core. Dec 16 14:14:09.880331 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 14:14:10.176615 sudo[1912]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 14:14:10.177948 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 14:14:10.196352 sudo[1912]: pam_unix(sudo:session): session closed for user root Dec 16 14:14:10.344041 sshd[1911]: Connection closed by 139.178.89.65 port 33930 Dec 16 14:14:10.343087 sshd-session[1908]: pam_unix(sshd:session): session closed for user core Dec 16 14:14:10.348395 systemd[1]: sshd@6-10.230.52.194:22-139.178.89.65:33930.service: Deactivated successfully. Dec 16 14:14:10.350901 systemd[1]: session-9.scope: Deactivated successfully. 
Dec 16 14:14:10.354479 systemd-logind[1646]: Session 9 logged out. Waiting for processes to exit. Dec 16 14:14:10.356125 systemd-logind[1646]: Removed session 9. Dec 16 14:14:10.507779 systemd[1]: Started sshd@7-10.230.52.194:22-139.178.89.65:35886.service - OpenSSH per-connection server daemon (139.178.89.65:35886). Dec 16 14:14:11.292584 sshd[1918]: Accepted publickey for core from 139.178.89.65 port 35886 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:14:11.294549 sshd-session[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:14:11.303695 systemd-logind[1646]: New session 10 of user core. Dec 16 14:14:11.310271 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 14:14:11.596789 sudo[1923]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 14:14:11.597479 sudo[1923]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 14:14:11.606056 sudo[1923]: pam_unix(sudo:session): session closed for user root Dec 16 14:14:11.617355 sudo[1922]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 14:14:11.617843 sudo[1922]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 14:14:11.636652 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 14:14:11.695000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 14:14:11.699412 kernel: kauditd_printk_skb: 135 callbacks suppressed Dec 16 14:14:11.699519 kernel: audit: type=1305 audit(1765894451.695:232): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 14:14:11.699561 augenrules[1945]: No rules Dec 16 14:14:11.706466 kernel: audit: type=1300 audit(1765894451.695:232): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc95b1c8e0 a2=420 a3=0 items=0 ppid=1926 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:11.695000 audit[1945]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc95b1c8e0 a2=420 a3=0 items=0 ppid=1926 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:11.701698 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 14:14:11.702676 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 14:14:11.707317 sudo[1922]: pam_unix(sudo:session): session closed for user root Dec 16 14:14:11.695000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 14:14:11.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:14:11.711988 kernel: audit: type=1327 audit(1765894451.695:232): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 14:14:11.712077 kernel: audit: type=1130 audit(1765894451.705:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:11.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:11.715870 kernel: audit: type=1131 audit(1765894451.705:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:11.707000 audit[1922]: USER_END pid=1922 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 14:14:11.719818 kernel: audit: type=1106 audit(1765894451.707:235): pid=1922 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 14:14:11.707000 audit[1922]: CRED_DISP pid=1922 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 14:14:11.724046 kernel: audit: type=1104 audit(1765894451.707:236): pid=1922 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 14:14:11.854837 sshd[1921]: Connection closed by 139.178.89.65 port 35886 Dec 16 14:14:11.855455 sshd-session[1918]: pam_unix(sshd:session): session closed for user core Dec 16 14:14:11.857000 audit[1918]: USER_END pid=1918 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:14:11.863056 kernel: audit: type=1106 audit(1765894451.857:237): pid=1918 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:14:11.863631 systemd[1]: sshd@7-10.230.52.194:22-139.178.89.65:35886.service: Deactivated successfully. Dec 16 14:14:11.857000 audit[1918]: CRED_DISP pid=1918 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:14:11.866311 systemd[1]: session-10.scope: Deactivated successfully. 
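[Annotation] The audit PROCTITLE fields above are hex-encoded command lines with NUL-separated arguments. Decoding the value from the auditctl record recovers the exact command augenrules ran:

```python
# Sketch: audit PROCTITLE fields are hex-encoded command lines with NUL-separated
# arguments. Decoding the value from the auditctl record above recovers the exact
# command augenrules ran.
hex_proctitle = ("2F7362696E2F617564697463746C002D52"
                 "002F6574632F61756469742F61756469742E72756C6573")   # value from the log
argv = [part.decode() for part in bytes.fromhex(hex_proctitle).split(b"\x00")]
print(argv)   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```

The same decoding applies to the NETFILTER_CFG proctitle records dockerd emits further down; the first of those decodes to `/usr/bin/iptables --wait -t nat -N DOCKER`.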
Dec 16 14:14:11.869175 kernel: audit: type=1104 audit(1765894451.857:238): pid=1918 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:14:11.869251 systemd-logind[1646]: Session 10 logged out. Waiting for processes to exit. Dec 16 14:14:11.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.52.194:22-139.178.89.65:35886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:11.872320 systemd-logind[1646]: Removed session 10. Dec 16 14:14:11.875113 kernel: audit: type=1131 audit(1765894451.863:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.52.194:22-139.178.89.65:35886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:12.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.52.194:22-139.178.89.65:35900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:12.016683 systemd[1]: Started sshd@8-10.230.52.194:22-139.178.89.65:35900.service - OpenSSH per-connection server daemon (139.178.89.65:35900). Dec 16 14:14:12.801000 audit[1954]: USER_ACCT pid=1954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:14:12.801623 sshd[1954]: Accepted publickey for core from 139.178.89.65 port 35900 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:14:12.802000 audit[1954]: CRED_ACQ pid=1954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:14:12.802000 audit[1954]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca9adf460 a2=3 a3=0 items=0 ppid=1 pid=1954 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:12.802000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:14:12.803454 sshd-session[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:14:12.805065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 14:14:12.809107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 14:14:12.814289 systemd-logind[1646]: New session 11 of user core. Dec 16 14:14:12.819477 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 14:14:12.827000 audit[1954]: USER_START pid=1954 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:14:12.830000 audit[1960]: CRED_ACQ pid=1960 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:14:13.027962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 14:14:13.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:13.040502 (kubelet)[1966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 14:14:13.104000 audit[1972]: USER_ACCT pid=1972 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 14:14:13.105563 sudo[1972]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 14:14:13.106000 audit[1972]: CRED_REFR pid=1972 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 14:14:13.106528 sudo[1972]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 14:14:13.110000 audit[1972]: USER_START pid=1972 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 14:14:13.116030 kubelet[1966]: E1216 14:14:13.115917 1966 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 14:14:13.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 14:14:13.120578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 14:14:13.120837 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 14:14:13.123076 systemd[1]: kubelet.service: Consumed 253ms CPU time, 109.1M memory peak. Dec 16 14:14:13.636163 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 14:14:13.658804 (dockerd)[1990]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 14:14:14.033711 dockerd[1990]: time="2025-12-16T14:14:14.033594530Z" level=info msg="Starting up" Dec 16 14:14:14.035549 dockerd[1990]: time="2025-12-16T14:14:14.035513248Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 14:14:14.056244 dockerd[1990]: time="2025-12-16T14:14:14.056173374Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 14:14:14.115848 dockerd[1990]: time="2025-12-16T14:14:14.115774959Z" level=info msg="Loading containers: start." Dec 16 14:14:14.132045 kernel: Initializing XFRM netlink socket Dec 16 14:14:14.214000 audit[2041]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.214000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe58df3c90 a2=0 a3=0 items=0 ppid=1990 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.214000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 14:14:14.218000 audit[2043]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.218000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcd99023c0 a2=0 a3=0 items=0 ppid=1990 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.218000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 14:14:14.221000 audit[2045]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.221000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccce82e60 a2=0 a3=0 items=0 ppid=1990 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.221000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 14:14:14.224000 audit[2047]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.224000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcaaba1380 a2=0 a3=0 items=0 ppid=1990 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.224000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 14:14:14.227000 audit[2049]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.227000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc3eabc750 a2=0 a3=0 items=0 ppid=1990 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 14:14:14.230000 audit[2051]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.230000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc354cd600 a2=0 a3=0 items=0 ppid=1990 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.230000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 14:14:14.233000 audit[2053]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.233000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcf418e090 a2=0 a3=0 items=0 ppid=1990 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.233000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 14:14:14.236000 audit[2055]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.236000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff2d3a0a60 a2=0 a3=0 items=0 ppid=1990 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.236000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 14:14:14.285000 audit[2058]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.285000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffd5daf3850 a2=0 a3=0 items=0 ppid=1990 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.285000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 14:14:14.289000 audit[2060]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2060 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.289000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff788caad0 a2=0 a3=0 items=0 ppid=1990 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.289000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 14:14:14.292000 audit[2062]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.292000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffdee410730 a2=0 a3=0 items=0 ppid=1990 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 14:14:14.295000 audit[2064]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.295000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffce542a9b0 a2=0 a3=0 items=0 ppid=1990 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.295000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 14:14:14.298000 audit[2066]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.298000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe7ee2ec50 a2=0 a3=0 items=0 ppid=1990 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.298000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 14:14:14.352000 audit[2096]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.352000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc3e60fc70 a2=0 a3=0 items=0 ppid=1990 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.352000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 14:14:14.355000 audit[2098]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.355000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff4eff8770 a2=0 a3=0 items=0 ppid=1990 pid=2098 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 14:14:14.358000 audit[2100]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.358000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5e2d2d10 a2=0 a3=0 items=0 ppid=1990 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.358000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 14:14:14.361000 audit[2102]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.361000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc87d119b0 a2=0 a3=0 items=0 ppid=1990 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.361000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 14:14:14.364000 audit[2104]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.364000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe2d32f420 a2=0 a3=0 items=0 ppid=1990 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.364000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 14:14:14.367000 audit[2106]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.367000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc32359b40 a2=0 a3=0 items=0 ppid=1990 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.367000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 14:14:14.370000 audit[2108]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.370000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdf68edb00 a2=0 a3=0 items=0 ppid=1990 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.370000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 14:14:14.373000 audit[2110]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.373000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd9594e3e0 a2=0 a3=0 items=0 ppid=1990 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.373000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 14:14:14.376000 audit[2112]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.376000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff19ac3710 a2=0 a3=0 items=0 ppid=1990 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.376000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 14:14:14.379000 audit[2114]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.379000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffec7246ce0 a2=0 a3=0 items=0 ppid=1990 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 14:14:14.382000 audit[2116]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.382000 audit[2116]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffc7c51a20 a2=0 a3=0 items=0 ppid=1990 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.382000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 14:14:14.386000 audit[2118]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.386000 audit[2118]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffff1490160 a2=0 a3=0 items=0 ppid=1990 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.386000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 14:14:14.389000 audit[2120]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.389000 audit[2120]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc2fac4a80 a2=0 a3=0 items=0 ppid=1990 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.389000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 14:14:14.396000 audit[2125]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.396000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa0d08e60 a2=0 a3=0 items=0 ppid=1990 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.396000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 14:14:14.400000 audit[2127]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.400000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffe2c02120 a2=0 a3=0 items=0 ppid=1990 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.400000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 14:14:14.402000 audit[2129]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.402000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc142e3770 a2=0 a3=0 items=0 ppid=1990 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.402000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 14:14:14.405000 audit[2131]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.405000 audit[2131]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff27900150 a2=0 a3=0 items=0 ppid=1990 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.405000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 14:14:14.409000 audit[2133]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=2133 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.409000 audit[2133]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffe4029eb0 a2=0 a3=0 items=0 ppid=1990 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.409000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 14:14:14.412000 audit[2135]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:14.412000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff54685260 a2=0 a3=0 items=0 ppid=1990 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 14:14:14.422858 systemd-timesyncd[1578]: Network configuration changed, trying to establish connection. Dec 16 14:14:14.429000 audit[2139]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.429000 audit[2139]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd8b394810 a2=0 a3=0 items=0 ppid=1990 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.429000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 14:14:14.434000 audit[2141]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.434000 audit[2141]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffdba9fa500 a2=0 a3=0 items=0 ppid=1990 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.434000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 14:14:14.449000 audit[2149]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.449000 audit[2149]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff83ab7250 a2=0 a3=0 items=0 ppid=1990 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.449000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 14:14:14.462000 audit[2155]: NETFILTER_CFG table=filter:37 family=2 entries=1 
op=nft_register_rule pid=2155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.462000 audit[2155]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff8b57c2e0 a2=0 a3=0 items=0 ppid=1990 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.462000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 14:14:14.466000 audit[2157]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.466000 audit[2157]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fffeeb8e000 a2=0 a3=0 items=0 ppid=1990 pid=2157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.466000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 14:14:14.469000 audit[2159]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.469000 audit[2159]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd7f9ce940 a2=0 a3=0 items=0 ppid=1990 pid=2159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.469000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 14:14:14.473000 audit[2161]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.473000 audit[2161]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffef10e3f30 a2=0 a3=0 items=0 ppid=1990 pid=2161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.473000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 14:14:14.476000 audit[2163]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:14.476000 audit[2163]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcaee7f580 a2=0 a3=0 items=0 ppid=1990 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:14.476000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 14:14:14.477114 systemd-networkd[1572]: docker0: Link UP Dec 16 14:14:14.481378 dockerd[1990]: time="2025-12-16T14:14:14.481320037Z" level=info msg="Loading containers: done." Dec 16 14:14:14.500589 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1625944924-merged.mount: Deactivated successfully. Dec 16 14:14:14.506493 dockerd[1990]: time="2025-12-16T14:14:14.506420871Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 14:14:14.506647 dockerd[1990]: time="2025-12-16T14:14:14.506587849Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 14:14:14.506765 dockerd[1990]: time="2025-12-16T14:14:14.506730994Z" level=info msg="Initializing buildkit" Dec 16 14:14:14.533737 dockerd[1990]: time="2025-12-16T14:14:14.533670648Z" level=info msg="Completed buildkit initialization" Dec 16 14:14:14.543625 dockerd[1990]: time="2025-12-16T14:14:14.543419207Z" level=info msg="Daemon has completed initialization" Dec 16 14:14:14.543980 dockerd[1990]: time="2025-12-16T14:14:14.543807689Z" level=info msg="API listen on /run/docker.sock" Dec 16 14:14:14.544439 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 14:14:14.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:15.650823 systemd-timesyncd[1578]: Server has too large root distance. Disconnecting. Dec 16 14:14:16.352927 systemd-resolved[1328]: Clock change detected. Flushing caches. Dec 16 14:14:16.353707 systemd-timesyncd[1578]: Contacted time server [2a0f:85c0::50]:123 (2.flatcar.pool.ntp.org). Dec 16 14:14:16.353803 systemd-timesyncd[1578]: Initial clock synchronization to Tue 2025-12-16 14:14:16.352583 UTC. Dec 16 14:14:16.433287 containerd[1683]: time="2025-12-16T14:14:16.433112684Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 14:14:17.755141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3177924459.mount: Deactivated successfully. 
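Each audit SYSCALL record above is paired with a PROCTITLE record whose value is the process argv, hex-encoded with NUL separators, so the exact iptables commands dockerd issued while bringing up docker0 can be read back from the log. A short decoder, with the sample string copied from the first NETFILTER_CFG event above:

#!/usr/bin/env python3
# Decode an audit PROCTITLE value: the process argv, hex-encoded with NUL
# bytes separating the arguments.

def decode_proctitle(hexstr: str) -> str:
    return " ".join(
        part.decode("utf-8", errors="replace")
        for part in bytes.fromhex(hexstr).split(b"\x00")
    )

# Copied from the first NETFILTER_CFG event in the log above.
sample = "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
print(decode_proctitle(sample))  # -> /usr/bin/iptables --wait -t nat -N DOCKER
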
Dec 16 14:14:20.707218 containerd[1683]: time="2025-12-16T14:14:20.706817966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:20.708282 containerd[1683]: time="2025-12-16T14:14:20.708249596Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28990042" Dec 16 14:14:20.710227 containerd[1683]: time="2025-12-16T14:14:20.708890518Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:20.712263 containerd[1683]: time="2025-12-16T14:14:20.712229474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:20.713831 containerd[1683]: time="2025-12-16T14:14:20.713792317Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 4.28054554s" Dec 16 14:14:20.713905 containerd[1683]: time="2025-12-16T14:14:20.713849675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 14:14:20.714863 containerd[1683]: time="2025-12-16T14:14:20.714832961Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 14:14:22.472269 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 16 14:14:22.472656 kernel: audit: type=1131 audit(1765894462.464:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:22.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:22.465100 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 16 14:14:22.484000 audit: BPF prog-id=63 op=UNLOAD Dec 16 14:14:22.487432 kernel: audit: type=1334 audit(1765894462.484:293): prog-id=63 op=UNLOAD Dec 16 14:14:23.317219 containerd[1683]: time="2025-12-16T14:14:23.316416822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:23.318994 containerd[1683]: time="2025-12-16T14:14:23.318936095Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 16 14:14:23.320870 containerd[1683]: time="2025-12-16T14:14:23.320790942Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:23.325896 containerd[1683]: time="2025-12-16T14:14:23.325821375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:23.328405 containerd[1683]: time="2025-12-16T14:14:23.328315460Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 2.613351602s" Dec 16 14:14:23.328405 containerd[1683]: time="2025-12-16T14:14:23.328369309Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 14:14:23.329334 containerd[1683]: time="2025-12-16T14:14:23.329301452Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 14:14:23.942142 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 14:14:23.946302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 14:14:24.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:24.498075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 14:14:24.506209 kernel: audit: type=1130 audit(1765894464.497:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:24.515631 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 14:14:24.590812 kubelet[2276]: E1216 14:14:24.590671 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 14:14:24.593751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 14:14:24.593989 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
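The kubelet attempts land roughly ten seconds apart (main process exited at 14:14:13.12 above, restart counter 3 scheduled at 14:14:23.94, counter 4 further down at 14:14:34.69), which is consistent with a Restart= policy using a RestartSec of about ten seconds; that setting is an assumption, since the unit file itself is not shown in the log. The spacing, computed from the timestamps as logged:

#!/usr/bin/env python3
# Spacing between kubelet restart attempts, from timestamps copied out of the
# log; the ~10 s RestartSec this suggests is an assumption (the unit file is
# not shown here).
from datetime import datetime

attempts = [
    datetime(2025, 12, 16, 14, 14, 13, 120578),  # main process exited (above)
    datetime(2025, 12, 16, 14, 14, 23, 942142),  # restart counter is at 3
    datetime(2025, 12, 16, 14, 14, 34, 692298),  # restart counter is at 4 (later)
]

for earlier, later in zip(attempts, attempts[1:]):
    delta = (later - earlier).total_seconds()
    print(f"{earlier.time()} -> {later.time()}: {delta:.1f}s")
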
Dec 16 14:14:24.599200 kernel: audit: type=1131 audit(1765894464.593:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 14:14:24.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 14:14:24.594658 systemd[1]: kubelet.service: Consumed 231ms CPU time, 108.7M memory peak. Dec 16 14:14:25.506655 containerd[1683]: time="2025-12-16T14:14:25.506556907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:25.508140 containerd[1683]: time="2025-12-16T14:14:25.508081558Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Dec 16 14:14:25.509653 containerd[1683]: time="2025-12-16T14:14:25.508587540Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:25.512805 containerd[1683]: time="2025-12-16T14:14:25.512743919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:25.514655 containerd[1683]: time="2025-12-16T14:14:25.514616511Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.185266714s" Dec 16 14:14:25.514842 containerd[1683]: time="2025-12-16T14:14:25.514813428Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 14:14:25.516429 containerd[1683]: time="2025-12-16T14:14:25.516385855Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 14:14:27.324703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605006080.mount: Deactivated successfully. 
Dec 16 14:14:28.327015 containerd[1683]: time="2025-12-16T14:14:28.326953533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:28.329243 containerd[1683]: time="2025-12-16T14:14:28.329215526Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Dec 16 14:14:28.330875 containerd[1683]: time="2025-12-16T14:14:28.330841892Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:28.334886 containerd[1683]: time="2025-12-16T14:14:28.334849729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:28.336908 containerd[1683]: time="2025-12-16T14:14:28.336875004Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.820440177s" Dec 16 14:14:28.337053 containerd[1683]: time="2025-12-16T14:14:28.337028787Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 14:14:28.337828 containerd[1683]: time="2025-12-16T14:14:28.337802809Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 14:14:29.166881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331527724.mount: Deactivated successfully. 
Dec 16 14:14:31.495224 containerd[1683]: time="2025-12-16T14:14:31.494464507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:31.497258 containerd[1683]: time="2025-12-16T14:14:31.497215612Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Dec 16 14:14:31.499802 containerd[1683]: time="2025-12-16T14:14:31.498667105Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:31.503925 containerd[1683]: time="2025-12-16T14:14:31.503842992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:31.506884 containerd[1683]: time="2025-12-16T14:14:31.506626090Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.168632217s" Dec 16 14:14:31.506884 containerd[1683]: time="2025-12-16T14:14:31.506708202Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 14:14:31.507708 containerd[1683]: time="2025-12-16T14:14:31.507653533Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 14:14:32.285481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1335430010.mount: Deactivated successfully. 
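The containerd pull messages report both the bytes actually fetched during a pull ("bytes read") and the wall-clock duration, so a rough effective download rate can be derived; the "size" field in the "Pulled image" line is the image's recorded size and need not match what was transferred. A quick calculation using the kube-apiserver and coredns figures logged above (approximate, since the logged duration also covers unpacking):

#!/usr/bin/env python3
# Effective pull rate from the "bytes read" and duration values logged above.

pulls = {
    # image: (bytes read, seconds), both copied from the containerd messages
    "registry.k8s.io/kube-apiserver:v1.33.7": (28_990_042, 4.28054554),
    "registry.k8s.io/coredns/coredns:v1.12.0": (20_128_467, 3.168632217),
}

for image, (nbytes, secs) in pulls.items():
    rate = nbytes / secs / (1024 * 1024)
    print(f"{image}: {nbytes / 1e6:.1f} MB fetched in {secs:.2f}s ~ {rate:.1f} MiB/s")
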
Dec 16 14:14:32.291480 containerd[1683]: time="2025-12-16T14:14:32.291425380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 14:14:32.293438 containerd[1683]: time="2025-12-16T14:14:32.293394872Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 14:14:32.294081 containerd[1683]: time="2025-12-16T14:14:32.294032382Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 14:14:32.298150 containerd[1683]: time="2025-12-16T14:14:32.298055269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 14:14:32.302720 containerd[1683]: time="2025-12-16T14:14:32.302682186Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 794.983951ms" Dec 16 14:14:32.303044 containerd[1683]: time="2025-12-16T14:14:32.302837908Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 14:14:32.304665 containerd[1683]: time="2025-12-16T14:14:32.304635593Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 14:14:32.950614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177009211.mount: Deactivated successfully. Dec 16 14:14:34.692298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 14:14:34.697242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 14:14:34.928908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 14:14:34.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:34.936265 kernel: audit: type=1130 audit(1765894474.927:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:34.949049 (kubelet)[2413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 14:14:35.074916 kubelet[2413]: E1216 14:14:35.074826 2413 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 14:14:35.081946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 14:14:35.082233 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 14:14:35.089205 kernel: audit: type=1131 audit(1765894475.082:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 14:14:35.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 14:14:35.088718 systemd[1]: kubelet.service: Consumed 265ms CPU time, 110.4M memory peak. Dec 16 14:14:36.086364 update_engine[1648]: I20251216 14:14:36.086130 1648 update_attempter.cc:509] Updating boot flags... Dec 16 14:14:36.731810 containerd[1683]: time="2025-12-16T14:14:36.731725460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:36.733588 containerd[1683]: time="2025-12-16T14:14:36.733208216Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Dec 16 14:14:36.734298 containerd[1683]: time="2025-12-16T14:14:36.734262376Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:36.737863 containerd[1683]: time="2025-12-16T14:14:36.737824096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:14:36.739568 containerd[1683]: time="2025-12-16T14:14:36.739529749Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.434647895s" Dec 16 14:14:36.739630 containerd[1683]: time="2025-12-16T14:14:36.739574744Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 14:14:41.789879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 14:14:41.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:41.790173 systemd[1]: kubelet.service: Consumed 265ms CPU time, 110.4M memory peak. Dec 16 14:14:41.796212 kernel: audit: type=1130 audit(1765894481.788:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:41.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:41.800495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 14:14:41.801213 kernel: audit: type=1131 audit(1765894481.788:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:14:41.840014 systemd[1]: Reload requested from client PID 2467 ('systemctl') (unit session-11.scope)... Dec 16 14:14:41.840081 systemd[1]: Reloading... Dec 16 14:14:42.012256 zram_generator::config[2521]: No configuration found. Dec 16 14:14:42.355437 systemd[1]: Reloading finished in 514 ms. Dec 16 14:14:42.403000 audit: BPF prog-id=67 op=LOAD Dec 16 14:14:42.408457 kernel: audit: type=1334 audit(1765894482.403:300): prog-id=67 op=LOAD Dec 16 14:14:42.408553 kernel: audit: type=1334 audit(1765894482.403:301): prog-id=68 op=LOAD Dec 16 14:14:42.403000 audit: BPF prog-id=68 op=LOAD Dec 16 14:14:42.410013 kernel: audit: type=1334 audit(1765894482.403:302): prog-id=53 op=UNLOAD Dec 16 14:14:42.403000 audit: BPF prog-id=53 op=UNLOAD Dec 16 14:14:42.403000 audit: BPF prog-id=54 op=UNLOAD Dec 16 14:14:42.412916 kernel: audit: type=1334 audit(1765894482.403:303): prog-id=54 op=UNLOAD Dec 16 14:14:42.412982 kernel: audit: type=1334 audit(1765894482.407:304): prog-id=69 op=LOAD Dec 16 14:14:42.407000 audit: BPF prog-id=69 op=LOAD Dec 16 14:14:42.414501 kernel: audit: type=1334 audit(1765894482.407:305): prog-id=58 op=UNLOAD Dec 16 14:14:42.407000 audit: BPF prog-id=58 op=UNLOAD Dec 16 14:14:42.415940 kernel: audit: type=1334 audit(1765894482.412:306): prog-id=70 op=LOAD Dec 16 14:14:42.412000 audit: BPF prog-id=70 op=LOAD Dec 16 14:14:42.417446 kernel: audit: type=1334 audit(1765894482.412:307): prog-id=60 op=UNLOAD Dec 16 14:14:42.412000 audit: BPF prog-id=60 op=UNLOAD Dec 16 14:14:42.412000 audit: BPF prog-id=71 op=LOAD Dec 16 14:14:42.412000 audit: BPF prog-id=72 op=LOAD Dec 16 14:14:42.412000 audit: BPF prog-id=61 op=UNLOAD Dec 16 14:14:42.412000 audit: BPF prog-id=62 op=UNLOAD Dec 16 14:14:42.413000 audit: BPF prog-id=73 op=LOAD Dec 16 14:14:42.413000 audit: BPF prog-id=49 op=UNLOAD Dec 16 14:14:42.415000 audit: BPF prog-id=74 op=LOAD Dec 16 14:14:42.415000 audit: BPF prog-id=55 op=UNLOAD Dec 16 14:14:42.415000 audit: BPF prog-id=75 op=LOAD Dec 16 14:14:42.415000 audit: BPF prog-id=76 op=LOAD Dec 16 14:14:42.415000 audit: BPF prog-id=56 op=UNLOAD Dec 16 14:14:42.415000 audit: BPF prog-id=57 op=UNLOAD Dec 16 14:14:42.418000 audit: BPF prog-id=77 op=LOAD Dec 16 14:14:42.418000 audit: BPF prog-id=50 op=UNLOAD Dec 16 14:14:42.418000 audit: BPF prog-id=78 op=LOAD Dec 16 14:14:42.418000 audit: BPF prog-id=79 op=LOAD Dec 16 14:14:42.418000 audit: BPF prog-id=51 op=UNLOAD Dec 16 14:14:42.418000 audit: BPF prog-id=52 op=UNLOAD Dec 16 14:14:42.439000 audit: BPF prog-id=80 op=LOAD Dec 16 14:14:42.439000 audit: BPF prog-id=59 op=UNLOAD Dec 16 14:14:42.439000 audit: BPF prog-id=81 op=LOAD Dec 16 14:14:42.439000 audit: BPF prog-id=66 op=UNLOAD Dec 16 14:14:42.440000 audit: BPF prog-id=82 op=LOAD Dec 16 14:14:42.440000 audit: BPF prog-id=46 op=UNLOAD Dec 16 14:14:42.440000 audit: BPF prog-id=83 op=LOAD Dec 16 14:14:42.440000 audit: BPF prog-id=84 op=LOAD Dec 16 14:14:42.440000 audit: BPF prog-id=47 op=UNLOAD Dec 16 14:14:42.440000 audit: BPF prog-id=48 op=UNLOAD Dec 16 14:14:42.442000 audit: BPF prog-id=85 op=LOAD Dec 16 14:14:42.442000 audit: BPF prog-id=43 op=UNLOAD Dec 16 14:14:42.442000 audit: BPF prog-id=86 op=LOAD Dec 16 14:14:42.442000 audit: BPF prog-id=87 op=LOAD Dec 16 14:14:42.442000 audit: BPF prog-id=44 op=UNLOAD Dec 16 14:14:42.442000 audit: BPF prog-id=45 op=UNLOAD Dec 16 14:14:42.471147 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 14:14:42.471325 systemd[1]: kubelet.service: Failed with result 'signal'. 
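The burst of BPF prog-id LOAD/UNLOAD audit records sits between the "Reloading..." and "Reloading finished" messages, which most plausibly reflects systemd detaching and re-attaching its per-unit cgroup BPF programs during the daemon reload; that reading is an interpretation, not something the log states. A small tally of those records, taking a saved copy of a log like this one on stdin:

#!/usr/bin/env python3
# Count the "audit: BPF prog-id=N op=LOAD/UNLOAD" records in the form shown
# above, reading a saved log dump from stdin.
import re
import sys
from collections import Counter

BPF_RE = re.compile(r"audit: BPF prog-id=\d+ op=(LOAD|UNLOAD)")

counts = Counter(m.group(1) for m in map(BPF_RE.search, sys.stdin) if m)
for op, n in sorted(counts.items()):
    print(f"{op}: {n}")
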
Dec 16 14:14:42.471924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 14:14:42.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 14:14:42.472046 systemd[1]: kubelet.service: Consumed 149ms CPU time, 98.2M memory peak. Dec 16 14:14:42.474719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 14:14:42.676928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 14:14:42.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:42.687673 (kubelet)[2582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 14:14:42.741073 kubelet[2582]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 14:14:42.741073 kubelet[2582]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 14:14:42.741073 kubelet[2582]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 14:14:42.741959 kubelet[2582]: I1216 14:14:42.741172 2582 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 14:14:43.689708 kubelet[2582]: I1216 14:14:43.689601 2582 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 14:14:43.689708 kubelet[2582]: I1216 14:14:43.689657 2582 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 14:14:43.690019 kubelet[2582]: I1216 14:14:43.689967 2582 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 14:14:43.755804 kubelet[2582]: I1216 14:14:43.755735 2582 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 14:14:43.757937 kubelet[2582]: E1216 14:14:43.757061 2582 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.52.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 14:14:43.778579 kubelet[2582]: I1216 14:14:43.778483 2582 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 14:14:43.794805 kubelet[2582]: I1216 14:14:43.794746 2582 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 14:14:43.798686 kubelet[2582]: I1216 14:14:43.798596 2582 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 14:14:43.801808 kubelet[2582]: I1216 14:14:43.798784 2582 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-6slrx.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 14:14:43.802216 kubelet[2582]: I1216 14:14:43.802193 2582 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 14:14:43.802713 kubelet[2582]: I1216 14:14:43.802376 2582 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 14:14:43.804068 kubelet[2582]: I1216 14:14:43.804042 2582 state_mem.go:36] "Initialized new in-memory state store" Dec 16 14:14:43.808760 kubelet[2582]: I1216 14:14:43.808719 2582 kubelet.go:480] "Attempting to sync node with API server" Dec 16 14:14:43.808925 kubelet[2582]: I1216 14:14:43.808904 2582 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 14:14:43.809068 kubelet[2582]: I1216 14:14:43.809049 2582 kubelet.go:386] "Adding apiserver pod source" Dec 16 14:14:43.811120 kubelet[2582]: I1216 14:14:43.810969 2582 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 14:14:43.821046 kubelet[2582]: E1216 14:14:43.820970 2582 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.52.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-6slrx.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 14:14:43.821624 kubelet[2582]: I1216 14:14:43.821196 2582 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 14:14:43.823324 kubelet[2582]: I1216 14:14:43.823144 2582 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the 
ClusterTrustBundleProjection featuregate is disabled" Dec 16 14:14:43.836589 kubelet[2582]: W1216 14:14:43.836106 2582 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 14:14:43.839511 kubelet[2582]: E1216 14:14:43.839467 2582 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.52.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 14:14:43.844536 kubelet[2582]: I1216 14:14:43.844484 2582 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 14:14:43.844648 kubelet[2582]: I1216 14:14:43.844628 2582 server.go:1289] "Started kubelet" Dec 16 14:14:43.846655 kubelet[2582]: I1216 14:14:43.846464 2582 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 14:14:43.848753 kubelet[2582]: I1216 14:14:43.848434 2582 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 14:14:43.849848 kubelet[2582]: I1216 14:14:43.849052 2582 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 14:14:43.851617 kubelet[2582]: I1216 14:14:43.851592 2582 server.go:317] "Adding debug handlers to kubelet server" Dec 16 14:14:43.859653 kubelet[2582]: E1216 14:14:43.855683 2582 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.52.194:6443/api/v1/namespaces/default/events\": dial tcp 10.230.52.194:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-6slrx.gb1.brightbox.com.1881b7ad4c60ac7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-6slrx.gb1.brightbox.com,UID:srv-6slrx.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-6slrx.gb1.brightbox.com,},FirstTimestamp:2025-12-16 14:14:43.844574332 +0000 UTC m=+1.151839282,LastTimestamp:2025-12-16 14:14:43.844574332 +0000 UTC m=+1.151839282,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-6slrx.gb1.brightbox.com,}" Dec 16 14:14:43.862532 kubelet[2582]: I1216 14:14:43.862491 2582 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 14:14:43.864379 kubelet[2582]: I1216 14:14:43.863792 2582 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 14:14:43.888242 kubelet[2582]: I1216 14:14:43.886102 2582 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 14:14:43.888242 kubelet[2582]: I1216 14:14:43.886306 2582 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 14:14:43.888242 kubelet[2582]: I1216 14:14:43.886444 2582 reconciler.go:26] "Reconciler: start to sync state" Dec 16 14:14:43.888242 kubelet[2582]: E1216 14:14:43.887010 2582 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.52.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Dec 16 14:14:43.891210 kubelet[2582]: E1216 14:14:43.890471 2582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-6slrx.gb1.brightbox.com\" not found" Dec 16 14:14:43.891430 kubelet[2582]: E1216 14:14:43.891391 2582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.52.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-6slrx.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.52.194:6443: connect: connection refused" interval="200ms" Dec 16 14:14:43.894756 kubelet[2582]: I1216 14:14:43.894498 2582 factory.go:223] Registration of the systemd container factory successfully Dec 16 14:14:43.894000 audit[2598]: NETFILTER_CFG table=mangle:42 family=10 entries=2 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:43.894000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe0c17c7d0 a2=0 a3=0 items=0 ppid=2582 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:43.894000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 14:14:43.896422 kubelet[2582]: I1216 14:14:43.896372 2582 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 14:14:43.898281 kubelet[2582]: I1216 14:14:43.898243 2582 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 14:14:43.898000 audit[2599]: NETFILTER_CFG table=mangle:43 family=2 entries=2 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:43.898000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc081217b0 a2=0 a3=0 items=0 ppid=2582 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:43.898000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 14:14:43.899000 audit[2600]: NETFILTER_CFG table=mangle:44 family=10 entries=1 op=nft_register_chain pid=2600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:43.899000 audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcbe3944f0 a2=0 a3=0 items=0 ppid=2582 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:43.899000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 14:14:43.902807 kubelet[2582]: E1216 14:14:43.902739 2582 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 14:14:43.903761 kubelet[2582]: I1216 14:14:43.903689 2582 factory.go:223] Registration of the containerd container factory successfully Dec 16 14:14:43.903000 audit[2601]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:43.903000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2e2286d0 a2=0 a3=0 items=0 ppid=2582 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:43.903000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 14:14:43.905000 audit[2602]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:43.905000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9bc64ae0 a2=0 a3=0 items=0 ppid=2582 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:43.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 14:14:43.915000 audit[2605]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:43.915000 audit[2605]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe376c3540 a2=0 a3=0 items=0 ppid=2582 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:43.915000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 14:14:43.920000 audit[2608]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:43.920000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff26920130 a2=0 a3=0 items=0 ppid=2582 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:43.920000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 14:14:43.923000 audit[2609]: NETFILTER_CFG table=filter:49 family=2 entries=2 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:43.923000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff3ab8dcd0 a2=0 a3=0 items=0 ppid=2582 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:43.923000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 14:14:43.946000 audit[2615]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:43.948622 kubelet[2582]: I1216 14:14:43.948585 2582 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 14:14:43.948753 kubelet[2582]: I1216 14:14:43.948734 2582 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 14:14:43.948872 kubelet[2582]: I1216 14:14:43.948855 2582 state_mem.go:36] "Initialized new in-memory state store" Dec 16 14:14:43.946000 audit[2615]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd4e528a40 a2=0 a3=0 items=0 ppid=2582 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:43.946000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 14:14:43.949907 kubelet[2582]: I1216 14:14:43.949649 2582 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 14:14:43.949907 kubelet[2582]: I1216 14:14:43.949735 2582 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 14:14:43.949907 kubelet[2582]: I1216 14:14:43.949769 2582 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 14:14:43.949907 kubelet[2582]: I1216 14:14:43.949787 2582 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 14:14:43.949907 kubelet[2582]: E1216 14:14:43.949863 2582 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 14:14:43.950000 audit[2617]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=2617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:43.950000 audit[2617]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd24a09970 a2=0 a3=0 items=0 ppid=2582 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:43.950000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 14:14:43.952104 kubelet[2582]: I1216 14:14:43.951772 2582 policy_none.go:49] "None policy: Start" Dec 16 14:14:43.952104 kubelet[2582]: I1216 14:14:43.951820 2582 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 14:14:43.952104 kubelet[2582]: I1216 14:14:43.951851 2582 state_mem.go:35] "Initializing new in-memory state store" Dec 16 14:14:43.951000 audit[2618]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2618 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:43.951000 audit[2618]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce8bcd3a0 a2=0 a3=0 items=0 ppid=2582 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:43.951000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 14:14:43.953000 audit[2619]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2619 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:43.953000 audit[2619]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2fe54aa0 a2=0 a3=0 items=0 ppid=2582 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:43.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 14:14:43.955772 kubelet[2582]: E1216 14:14:43.955704 2582 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.52.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 14:14:43.964871 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 14:14:43.982135 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 14:14:43.987616 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 14:14:43.991373 kubelet[2582]: E1216 14:14:43.991321 2582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-6slrx.gb1.brightbox.com\" not found" Dec 16 14:14:44.013126 kubelet[2582]: E1216 14:14:44.012964 2582 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 14:14:44.014845 kubelet[2582]: I1216 14:14:44.013621 2582 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 14:14:44.014845 kubelet[2582]: I1216 14:14:44.014254 2582 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 14:14:44.014845 kubelet[2582]: I1216 14:14:44.014715 2582 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 14:14:44.018552 kubelet[2582]: E1216 14:14:44.018509 2582 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 14:14:44.018671 kubelet[2582]: E1216 14:14:44.018606 2582 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-6slrx.gb1.brightbox.com\" not found" Dec 16 14:14:44.070047 systemd[1]: Created slice kubepods-burstable-podad7baca014429577fe3301cf0912fedf.slice - libcontainer container kubepods-burstable-podad7baca014429577fe3301cf0912fedf.slice. 
Dec 16 14:14:44.083755 kubelet[2582]: E1216 14:14:44.083674 2582 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-6slrx.gb1.brightbox.com\" not found" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.087849 systemd[1]: Created slice kubepods-burstable-podc91136d2a857877877d7a3e0220dc573.slice - libcontainer container kubepods-burstable-podc91136d2a857877877d7a3e0220dc573.slice. Dec 16 14:14:44.089376 kubelet[2582]: I1216 14:14:44.089017 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ffab8ea880a30539b75a2fbd8dc0269a-kubeconfig\") pod \"kube-scheduler-srv-6slrx.gb1.brightbox.com\" (UID: \"ffab8ea880a30539b75a2fbd8dc0269a\") " pod="kube-system/kube-scheduler-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.089376 kubelet[2582]: I1216 14:14:44.089095 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad7baca014429577fe3301cf0912fedf-usr-share-ca-certificates\") pod \"kube-apiserver-srv-6slrx.gb1.brightbox.com\" (UID: \"ad7baca014429577fe3301cf0912fedf\") " pod="kube-system/kube-apiserver-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.089376 kubelet[2582]: I1216 14:14:44.089154 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c91136d2a857877877d7a3e0220dc573-ca-certs\") pod \"kube-controller-manager-srv-6slrx.gb1.brightbox.com\" (UID: \"c91136d2a857877877d7a3e0220dc573\") " pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.089376 kubelet[2582]: I1216 14:14:44.089219 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c91136d2a857877877d7a3e0220dc573-flexvolume-dir\") pod \"kube-controller-manager-srv-6slrx.gb1.brightbox.com\" (UID: \"c91136d2a857877877d7a3e0220dc573\") " pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.089376 kubelet[2582]: I1216 14:14:44.089249 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad7baca014429577fe3301cf0912fedf-ca-certs\") pod \"kube-apiserver-srv-6slrx.gb1.brightbox.com\" (UID: \"ad7baca014429577fe3301cf0912fedf\") " pod="kube-system/kube-apiserver-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.089614 kubelet[2582]: I1216 14:14:44.089293 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad7baca014429577fe3301cf0912fedf-k8s-certs\") pod \"kube-apiserver-srv-6slrx.gb1.brightbox.com\" (UID: \"ad7baca014429577fe3301cf0912fedf\") " pod="kube-system/kube-apiserver-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.089614 kubelet[2582]: I1216 14:14:44.089320 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c91136d2a857877877d7a3e0220dc573-k8s-certs\") pod \"kube-controller-manager-srv-6slrx.gb1.brightbox.com\" (UID: \"c91136d2a857877877d7a3e0220dc573\") " pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.089614 kubelet[2582]: I1216 14:14:44.089345 2582 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c91136d2a857877877d7a3e0220dc573-kubeconfig\") pod \"kube-controller-manager-srv-6slrx.gb1.brightbox.com\" (UID: \"c91136d2a857877877d7a3e0220dc573\") " pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.089614 kubelet[2582]: I1216 14:14:44.089393 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c91136d2a857877877d7a3e0220dc573-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-6slrx.gb1.brightbox.com\" (UID: \"c91136d2a857877877d7a3e0220dc573\") " pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.092416 kubelet[2582]: E1216 14:14:44.092379 2582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.52.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-6slrx.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.52.194:6443: connect: connection refused" interval="400ms" Dec 16 14:14:44.097338 kubelet[2582]: E1216 14:14:44.097308 2582 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-6slrx.gb1.brightbox.com\" not found" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.101653 systemd[1]: Created slice kubepods-burstable-podffab8ea880a30539b75a2fbd8dc0269a.slice - libcontainer container kubepods-burstable-podffab8ea880a30539b75a2fbd8dc0269a.slice. Dec 16 14:14:44.105198 kubelet[2582]: E1216 14:14:44.104887 2582 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-6slrx.gb1.brightbox.com\" not found" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.117620 kubelet[2582]: I1216 14:14:44.117584 2582 kubelet_node_status.go:75] "Attempting to register node" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.118696 kubelet[2582]: E1216 14:14:44.118667 2582 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.52.194:6443/api/v1/nodes\": dial tcp 10.230.52.194:6443: connect: connection refused" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.322302 kubelet[2582]: I1216 14:14:44.322136 2582 kubelet_node_status.go:75] "Attempting to register node" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.322655 kubelet[2582]: E1216 14:14:44.322623 2582 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.52.194:6443/api/v1/nodes\": dial tcp 10.230.52.194:6443: connect: connection refused" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.386833 containerd[1683]: time="2025-12-16T14:14:44.386755392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-6slrx.gb1.brightbox.com,Uid:ad7baca014429577fe3301cf0912fedf,Namespace:kube-system,Attempt:0,}" Dec 16 14:14:44.399496 containerd[1683]: time="2025-12-16T14:14:44.399410174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-6slrx.gb1.brightbox.com,Uid:c91136d2a857877877d7a3e0220dc573,Namespace:kube-system,Attempt:0,}" Dec 16 14:14:44.407270 containerd[1683]: time="2025-12-16T14:14:44.407061709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-6slrx.gb1.brightbox.com,Uid:ffab8ea880a30539b75a2fbd8dc0269a,Namespace:kube-system,Attempt:0,}" Dec 16 14:14:44.497226 kubelet[2582]: E1216 
14:14:44.495663 2582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.52.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-6slrx.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.52.194:6443: connect: connection refused" interval="800ms" Dec 16 14:14:44.527719 containerd[1683]: time="2025-12-16T14:14:44.527427330Z" level=info msg="connecting to shim a0be5d8c64ac351b2f523cca36cfbed90a30a546eadc802e043e6f1f8807841c" address="unix:///run/containerd/s/2c5c7e3ea3903778680edd14a638a348b5248fc47294055cd2d144b317d8c3e2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:14:44.542699 containerd[1683]: time="2025-12-16T14:14:44.542606472Z" level=info msg="connecting to shim bcd870dad023aa1a2f4c78b9abd6636d1f268044852f3c08368628fc1800bdb8" address="unix:///run/containerd/s/964c5164b2d0b48f5d94604a3836906cf3cc43f7b5723ba49fdc344575a6db94" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:14:44.547270 containerd[1683]: time="2025-12-16T14:14:44.546831197Z" level=info msg="connecting to shim b318a0e7aaaaceecc250a90b9e11cf9f69998b9b5201ab9678966748a38eeb55" address="unix:///run/containerd/s/d07dbd4e7219525b2dd437db468bbbd0c61bd9433d9016f7d25ba25ff33495e6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:14:44.686505 systemd[1]: Started cri-containerd-bcd870dad023aa1a2f4c78b9abd6636d1f268044852f3c08368628fc1800bdb8.scope - libcontainer container bcd870dad023aa1a2f4c78b9abd6636d1f268044852f3c08368628fc1800bdb8. Dec 16 14:14:44.712258 systemd[1]: Started cri-containerd-a0be5d8c64ac351b2f523cca36cfbed90a30a546eadc802e043e6f1f8807841c.scope - libcontainer container a0be5d8c64ac351b2f523cca36cfbed90a30a546eadc802e043e6f1f8807841c. Dec 16 14:14:44.718956 systemd[1]: Started cri-containerd-b318a0e7aaaaceecc250a90b9e11cf9f69998b9b5201ab9678966748a38eeb55.scope - libcontainer container b318a0e7aaaaceecc250a90b9e11cf9f69998b9b5201ab9678966748a38eeb55. 
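Annotation: the repeated controller.go:145 "Failed to ensure lease exists, will retry" errors show the node-lease controller backing off while the API server at 10.230.52.194:6443 is still refusing connections; the logged interval doubles from 200ms to 400ms to 800ms and reaches 1.6s further down. An illustrative Python sketch of that doubling-backoff pattern follows; it is not kubelet's actual implementation, and the 7-second cap is an assumed value for the sketch only:

import time

def ensure_lease_with_backoff(try_once, base=0.2, factor=2.0, cap=7.0):
    # Call try_once() until it succeeds, doubling the wait after each
    # consecutive failure, as the retry intervals in the log above suggest.
    interval = base
    while not try_once():
        print(f"lease attempt failed, retrying in {interval:.1f}s")
        time.sleep(interval)
        interval = min(interval * factor, cap)

# Example: fail three times, then succeed (waits 0.2s, 0.4s, 0.8s).
attempts = iter([False, False, False, True])
ensure_lease_with_backoff(lambda: next(attempts))

Doubling with a cap keeps retry pressure on a refused endpoint low while still reacting quickly once the kube-apiserver container started below begins serving.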
Dec 16 14:14:44.729267 kubelet[2582]: I1216 14:14:44.729151 2582 kubelet_node_status.go:75] "Attempting to register node" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.730654 kubelet[2582]: E1216 14:14:44.730577 2582 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.52.194:6443/api/v1/nodes\": dial tcp 10.230.52.194:6443: connect: connection refused" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:44.743000 audit: BPF prog-id=88 op=LOAD Dec 16 14:14:44.744000 audit: BPF prog-id=89 op=LOAD Dec 16 14:14:44.744000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2645 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643837306461643032336161316132663463373862396162643636 Dec 16 14:14:44.745000 audit: BPF prog-id=89 op=UNLOAD Dec 16 14:14:44.745000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2645 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643837306461643032336161316132663463373862396162643636 Dec 16 14:14:44.751000 audit: BPF prog-id=90 op=LOAD Dec 16 14:14:44.751000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2645 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643837306461643032336161316132663463373862396162643636 Dec 16 14:14:44.752000 audit: BPF prog-id=91 op=LOAD Dec 16 14:14:44.752000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2645 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.756000 audit: BPF prog-id=92 op=LOAD Dec 16 14:14:44.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643837306461643032336161316132663463373862396162643636 Dec 16 14:14:44.756000 audit: BPF prog-id=91 op=UNLOAD Dec 16 14:14:44.756000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2645 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.757000 audit: BPF prog-id=93 op=LOAD Dec 16 14:14:44.757000 audit[2681]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2648 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313861306537616161616365656363323530613930623965313163 Dec 16 14:14:44.757000 audit: BPF prog-id=93 op=UNLOAD Dec 16 14:14:44.757000 audit[2681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313861306537616161616365656363323530613930623965313163 Dec 16 14:14:44.757000 audit: BPF prog-id=94 op=LOAD Dec 16 14:14:44.757000 audit[2681]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2648 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313861306537616161616365656363323530613930623965313163 Dec 16 14:14:44.757000 audit: BPF prog-id=95 op=LOAD Dec 16 14:14:44.757000 audit[2681]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2648 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313861306537616161616365656363323530613930623965313163 Dec 16 14:14:44.758000 audit: BPF prog-id=95 op=UNLOAD Dec 16 14:14:44.758000 audit[2681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313861306537616161616365656363323530613930623965313163 Dec 16 14:14:44.758000 audit: BPF prog-id=94 op=UNLOAD Dec 16 
14:14:44.758000 audit[2681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313861306537616161616365656363323530613930623965313163 Dec 16 14:14:44.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643837306461643032336161316132663463373862396162643636 Dec 16 14:14:44.758000 audit: BPF prog-id=90 op=UNLOAD Dec 16 14:14:44.758000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2645 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643837306461643032336161316132663463373862396162643636 Dec 16 14:14:44.758000 audit: BPF prog-id=96 op=LOAD Dec 16 14:14:44.758000 audit[2681]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2648 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233313861306537616161616365656363323530613930623965313163 Dec 16 14:14:44.761000 audit: BPF prog-id=97 op=LOAD Dec 16 14:14:44.761000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2645 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263643837306461643032336161316132663463373862396162643636 Dec 16 14:14:44.808000 audit: BPF prog-id=98 op=LOAD Dec 16 14:14:44.810000 audit: BPF prog-id=99 op=LOAD Dec 16 14:14:44.810000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2641 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.810000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626535643863363461633335316232663532336363613336636662 Dec 16 14:14:44.811000 audit: BPF prog-id=99 op=UNLOAD Dec 16 14:14:44.811000 audit[2663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626535643863363461633335316232663532336363613336636662 Dec 16 14:14:44.812000 audit: BPF prog-id=100 op=LOAD Dec 16 14:14:44.812000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2641 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626535643863363461633335316232663532336363613336636662 Dec 16 14:14:44.812000 audit: BPF prog-id=101 op=LOAD Dec 16 14:14:44.812000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2641 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626535643863363461633335316232663532336363613336636662 Dec 16 14:14:44.815000 audit: BPF prog-id=101 op=UNLOAD Dec 16 14:14:44.815000 audit[2663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626535643863363461633335316232663532336363613336636662 Dec 16 14:14:44.815000 audit: BPF prog-id=100 op=UNLOAD Dec 16 14:14:44.815000 audit[2663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.815000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626535643863363461633335316232663532336363613336636662 Dec 16 14:14:44.815000 audit: BPF prog-id=102 op=LOAD Dec 16 14:14:44.815000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2641 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:44.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626535643863363461633335316232663532336363613336636662 Dec 16 14:14:44.852364 containerd[1683]: time="2025-12-16T14:14:44.852298649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-6slrx.gb1.brightbox.com,Uid:ad7baca014429577fe3301cf0912fedf,Namespace:kube-system,Attempt:0,} returns sandbox id \"b318a0e7aaaaceecc250a90b9e11cf9f69998b9b5201ab9678966748a38eeb55\"" Dec 16 14:14:44.853090 containerd[1683]: time="2025-12-16T14:14:44.853042544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-6slrx.gb1.brightbox.com,Uid:c91136d2a857877877d7a3e0220dc573,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcd870dad023aa1a2f4c78b9abd6636d1f268044852f3c08368628fc1800bdb8\"" Dec 16 14:14:44.855704 kubelet[2582]: E1216 14:14:44.855421 2582 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.52.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 14:14:44.861992 containerd[1683]: time="2025-12-16T14:14:44.861922998Z" level=info msg="CreateContainer within sandbox \"bcd870dad023aa1a2f4c78b9abd6636d1f268044852f3c08368628fc1800bdb8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 14:14:44.865869 containerd[1683]: time="2025-12-16T14:14:44.865826251Z" level=info msg="CreateContainer within sandbox \"b318a0e7aaaaceecc250a90b9e11cf9f69998b9b5201ab9678966748a38eeb55\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 14:14:44.896646 containerd[1683]: time="2025-12-16T14:14:44.896533928Z" level=info msg="Container b9bea7e3160a64a02addaa58588b6e5c619e72556c00ccf3271e7ae00cd36a5a: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:14:44.897525 containerd[1683]: time="2025-12-16T14:14:44.897479818Z" level=info msg="Container ee5361d2959ac9043fa3308ea31691339f3e3b970fb6beaa254cef363a8be2c5: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:14:44.909843 containerd[1683]: time="2025-12-16T14:14:44.909752208Z" level=info msg="CreateContainer within sandbox \"b318a0e7aaaaceecc250a90b9e11cf9f69998b9b5201ab9678966748a38eeb55\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ee5361d2959ac9043fa3308ea31691339f3e3b970fb6beaa254cef363a8be2c5\"" Dec 16 14:14:44.911196 containerd[1683]: time="2025-12-16T14:14:44.911112765Z" level=info msg="CreateContainer within sandbox \"bcd870dad023aa1a2f4c78b9abd6636d1f268044852f3c08368628fc1800bdb8\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b9bea7e3160a64a02addaa58588b6e5c619e72556c00ccf3271e7ae00cd36a5a\"" Dec 16 14:14:44.911453 containerd[1683]: time="2025-12-16T14:14:44.911385120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-6slrx.gb1.brightbox.com,Uid:ffab8ea880a30539b75a2fbd8dc0269a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0be5d8c64ac351b2f523cca36cfbed90a30a546eadc802e043e6f1f8807841c\"" Dec 16 14:14:44.912006 containerd[1683]: time="2025-12-16T14:14:44.911782596Z" level=info msg="StartContainer for \"b9bea7e3160a64a02addaa58588b6e5c619e72556c00ccf3271e7ae00cd36a5a\"" Dec 16 14:14:44.912231 containerd[1683]: time="2025-12-16T14:14:44.912204619Z" level=info msg="StartContainer for \"ee5361d2959ac9043fa3308ea31691339f3e3b970fb6beaa254cef363a8be2c5\"" Dec 16 14:14:44.913662 containerd[1683]: time="2025-12-16T14:14:44.913579545Z" level=info msg="connecting to shim b9bea7e3160a64a02addaa58588b6e5c619e72556c00ccf3271e7ae00cd36a5a" address="unix:///run/containerd/s/964c5164b2d0b48f5d94604a3836906cf3cc43f7b5723ba49fdc344575a6db94" protocol=ttrpc version=3 Dec 16 14:14:44.914675 containerd[1683]: time="2025-12-16T14:14:44.914540392Z" level=info msg="connecting to shim ee5361d2959ac9043fa3308ea31691339f3e3b970fb6beaa254cef363a8be2c5" address="unix:///run/containerd/s/d07dbd4e7219525b2dd437db468bbbd0c61bd9433d9016f7d25ba25ff33495e6" protocol=ttrpc version=3 Dec 16 14:14:44.921211 containerd[1683]: time="2025-12-16T14:14:44.921145993Z" level=info msg="CreateContainer within sandbox \"a0be5d8c64ac351b2f523cca36cfbed90a30a546eadc802e043e6f1f8807841c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 14:14:44.931330 containerd[1683]: time="2025-12-16T14:14:44.931258340Z" level=info msg="Container 5eb3e9038a6a5852510d0558ddcd88bc8f3d4247ec69c30de01c9a31215b6e1c: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:14:44.947869 containerd[1683]: time="2025-12-16T14:14:44.947797341Z" level=info msg="CreateContainer within sandbox \"a0be5d8c64ac351b2f523cca36cfbed90a30a546eadc802e043e6f1f8807841c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5eb3e9038a6a5852510d0558ddcd88bc8f3d4247ec69c30de01c9a31215b6e1c\"" Dec 16 14:14:44.950585 containerd[1683]: time="2025-12-16T14:14:44.949632724Z" level=info msg="StartContainer for \"5eb3e9038a6a5852510d0558ddcd88bc8f3d4247ec69c30de01c9a31215b6e1c\"" Dec 16 14:14:44.956111 systemd[1]: Started cri-containerd-ee5361d2959ac9043fa3308ea31691339f3e3b970fb6beaa254cef363a8be2c5.scope - libcontainer container ee5361d2959ac9043fa3308ea31691339f3e3b970fb6beaa254cef363a8be2c5. Dec 16 14:14:44.961346 containerd[1683]: time="2025-12-16T14:14:44.961259098Z" level=info msg="connecting to shim 5eb3e9038a6a5852510d0558ddcd88bc8f3d4247ec69c30de01c9a31215b6e1c" address="unix:///run/containerd/s/2c5c7e3ea3903778680edd14a638a348b5248fc47294055cd2d144b317d8c3e2" protocol=ttrpc version=3 Dec 16 14:14:44.971697 systemd[1]: Started cri-containerd-b9bea7e3160a64a02addaa58588b6e5c619e72556c00ccf3271e7ae00cd36a5a.scope - libcontainer container b9bea7e3160a64a02addaa58588b6e5c619e72556c00ccf3271e7ae00cd36a5a. 
Dec 16 14:14:45.005000 audit: BPF prog-id=103 op=LOAD Dec 16 14:14:45.007000 audit: BPF prog-id=104 op=LOAD Dec 16 14:14:45.007000 audit[2760]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2648 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353336316432393539616339303433666133333038656133313639 Dec 16 14:14:45.008000 audit: BPF prog-id=104 op=UNLOAD Dec 16 14:14:45.008000 audit[2760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353336316432393539616339303433666133333038656133313639 Dec 16 14:14:45.009000 audit: BPF prog-id=105 op=LOAD Dec 16 14:14:45.009000 audit[2760]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2648 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353336316432393539616339303433666133333038656133313639 Dec 16 14:14:45.011000 audit: BPF prog-id=106 op=LOAD Dec 16 14:14:45.011000 audit[2760]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2648 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353336316432393539616339303433666133333038656133313639 Dec 16 14:14:45.011000 audit: BPF prog-id=106 op=UNLOAD Dec 16 14:14:45.011000 audit[2760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353336316432393539616339303433666133333038656133313639 Dec 16 14:14:45.012000 audit: BPF prog-id=105 op=UNLOAD Dec 16 14:14:45.012000 audit[2760]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353336316432393539616339303433666133333038656133313639 Dec 16 14:14:45.012000 audit: BPF prog-id=107 op=LOAD Dec 16 14:14:45.012000 audit[2760]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2648 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353336316432393539616339303433666133333038656133313639 Dec 16 14:14:45.024523 systemd[1]: Started cri-containerd-5eb3e9038a6a5852510d0558ddcd88bc8f3d4247ec69c30de01c9a31215b6e1c.scope - libcontainer container 5eb3e9038a6a5852510d0558ddcd88bc8f3d4247ec69c30de01c9a31215b6e1c. Dec 16 14:14:45.028000 audit: BPF prog-id=108 op=LOAD Dec 16 14:14:45.031000 audit: BPF prog-id=109 op=LOAD Dec 16 14:14:45.031000 audit[2761]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2645 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239626561376533313630613634613032616464616135383538386236 Dec 16 14:14:45.031000 audit: BPF prog-id=109 op=UNLOAD Dec 16 14:14:45.031000 audit[2761]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2645 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239626561376533313630613634613032616464616135383538386236 Dec 16 14:14:45.031000 audit: BPF prog-id=110 op=LOAD Dec 16 14:14:45.031000 audit[2761]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2645 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.031000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239626561376533313630613634613032616464616135383538386236 Dec 16 14:14:45.031000 audit: BPF prog-id=111 op=LOAD Dec 16 14:14:45.031000 audit[2761]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2645 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239626561376533313630613634613032616464616135383538386236 Dec 16 14:14:45.031000 audit: BPF prog-id=111 op=UNLOAD Dec 16 14:14:45.031000 audit[2761]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2645 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239626561376533313630613634613032616464616135383538386236 Dec 16 14:14:45.031000 audit: BPF prog-id=110 op=UNLOAD Dec 16 14:14:45.031000 audit[2761]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2645 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239626561376533313630613634613032616464616135383538386236 Dec 16 14:14:45.031000 audit: BPF prog-id=112 op=LOAD Dec 16 14:14:45.031000 audit[2761]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2645 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239626561376533313630613634613032616464616135383538386236 Dec 16 14:14:45.069000 audit: BPF prog-id=113 op=LOAD Dec 16 14:14:45.074000 audit: BPF prog-id=114 op=LOAD Dec 16 14:14:45.074000 audit[2793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2641 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.074000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623365393033386136613538353235313064303535386464636438 Dec 16 14:14:45.076000 audit: BPF prog-id=114 op=UNLOAD Dec 16 14:14:45.076000 audit[2793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623365393033386136613538353235313064303535386464636438 Dec 16 14:14:45.077000 audit: BPF prog-id=115 op=LOAD Dec 16 14:14:45.077000 audit[2793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2641 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623365393033386136613538353235313064303535386464636438 Dec 16 14:14:45.077000 audit: BPF prog-id=116 op=LOAD Dec 16 14:14:45.077000 audit[2793]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2641 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623365393033386136613538353235313064303535386464636438 Dec 16 14:14:45.077000 audit: BPF prog-id=116 op=UNLOAD Dec 16 14:14:45.077000 audit[2793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623365393033386136613538353235313064303535386464636438 Dec 16 14:14:45.078000 audit: BPF prog-id=115 op=UNLOAD Dec 16 14:14:45.078000 audit[2793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.078000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623365393033386136613538353235313064303535386464636438 Dec 16 14:14:45.078000 audit: BPF prog-id=117 op=LOAD Dec 16 14:14:45.078000 audit[2793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2641 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:45.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565623365393033386136613538353235313064303535386464636438 Dec 16 14:14:45.112984 containerd[1683]: time="2025-12-16T14:14:45.112912605Z" level=info msg="StartContainer for \"ee5361d2959ac9043fa3308ea31691339f3e3b970fb6beaa254cef363a8be2c5\" returns successfully" Dec 16 14:14:45.142080 containerd[1683]: time="2025-12-16T14:14:45.142011662Z" level=info msg="StartContainer for \"b9bea7e3160a64a02addaa58588b6e5c619e72556c00ccf3271e7ae00cd36a5a\" returns successfully" Dec 16 14:14:45.181470 containerd[1683]: time="2025-12-16T14:14:45.181375285Z" level=info msg="StartContainer for \"5eb3e9038a6a5852510d0558ddcd88bc8f3d4247ec69c30de01c9a31215b6e1c\" returns successfully" Dec 16 14:14:45.297856 kubelet[2582]: E1216 14:14:45.297625 2582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.52.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-6slrx.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.52.194:6443: connect: connection refused" interval="1.6s" Dec 16 14:14:45.412950 kubelet[2582]: E1216 14:14:45.412879 2582 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.52.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-6slrx.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 14:14:45.421859 kubelet[2582]: E1216 14:14:45.421777 2582 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.52.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 14:14:45.426058 kubelet[2582]: E1216 14:14:45.425701 2582 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.52.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 14:14:45.535985 kubelet[2582]: I1216 14:14:45.535922 2582 kubelet_node_status.go:75] "Attempting to register node" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:45.536618 kubelet[2582]: E1216 14:14:45.536439 2582 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.52.194:6443/api/v1/nodes\": dial tcp 10.230.52.194:6443: connect: connection refused" 
node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:45.991733 kubelet[2582]: E1216 14:14:45.991678 2582 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-6slrx.gb1.brightbox.com\" not found" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:45.998537 kubelet[2582]: E1216 14:14:45.998370 2582 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-6slrx.gb1.brightbox.com\" not found" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:45.999346 kubelet[2582]: E1216 14:14:45.999323 2582 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-6slrx.gb1.brightbox.com\" not found" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:47.006143 kubelet[2582]: E1216 14:14:47.005262 2582 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-6slrx.gb1.brightbox.com\" not found" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:47.006143 kubelet[2582]: E1216 14:14:47.005878 2582 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-6slrx.gb1.brightbox.com\" not found" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:47.007821 kubelet[2582]: E1216 14:14:47.007796 2582 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-6slrx.gb1.brightbox.com\" not found" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:47.141525 kubelet[2582]: I1216 14:14:47.141475 2582 kubelet_node_status.go:75] "Attempting to register node" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:48.017225 kubelet[2582]: E1216 14:14:48.015720 2582 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-6slrx.gb1.brightbox.com\" not found" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:48.017225 kubelet[2582]: E1216 14:14:48.015792 2582 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-6slrx.gb1.brightbox.com\" not found" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:48.156017 kubelet[2582]: E1216 14:14:48.155818 2582 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-6slrx.gb1.brightbox.com\" not found" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:48.172647 kubelet[2582]: I1216 14:14:48.172599 2582 kubelet_node_status.go:78] "Successfully registered node" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:48.173051 kubelet[2582]: E1216 14:14:48.172870 2582 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-6slrx.gb1.brightbox.com\": node \"srv-6slrx.gb1.brightbox.com\" not found" Dec 16 14:14:48.207770 kubelet[2582]: E1216 14:14:48.207715 2582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-6slrx.gb1.brightbox.com\" not found" Dec 16 14:14:48.209901 kubelet[2582]: E1216 14:14:48.209610 2582 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-6slrx.gb1.brightbox.com.1881b7ad4c60ac7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-6slrx.gb1.brightbox.com,UID:srv-6slrx.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:srv-6slrx.gb1.brightbox.com,},FirstTimestamp:2025-12-16 14:14:43.844574332 +0000 UTC m=+1.151839282,LastTimestamp:2025-12-16 14:14:43.844574332 +0000 UTC m=+1.151839282,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-6slrx.gb1.brightbox.com,}" Dec 16 14:14:48.308382 kubelet[2582]: E1216 14:14:48.307995 2582 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-6slrx.gb1.brightbox.com\" not found" Dec 16 14:14:48.392661 kubelet[2582]: I1216 14:14:48.392610 2582 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:48.401020 kubelet[2582]: E1216 14:14:48.400979 2582 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-6slrx.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:48.401727 kubelet[2582]: I1216 14:14:48.401143 2582 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:48.403210 kubelet[2582]: E1216 14:14:48.403124 2582 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-6slrx.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:48.403551 kubelet[2582]: I1216 14:14:48.403317 2582 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:48.404989 kubelet[2582]: E1216 14:14:48.404955 2582 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-6slrx.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:48.835742 kubelet[2582]: I1216 14:14:48.835703 2582 apiserver.go:52] "Watching apiserver" Dec 16 14:14:48.887249 kubelet[2582]: I1216 14:14:48.887210 2582 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 14:14:49.016303 kubelet[2582]: I1216 14:14:49.016253 2582 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:49.023448 kubelet[2582]: I1216 14:14:49.023394 2582 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 14:14:50.257483 systemd[1]: Reload requested from client PID 2863 ('systemctl') (unit session-11.scope)... Dec 16 14:14:50.257512 systemd[1]: Reloading... Dec 16 14:14:50.397147 zram_generator::config[2907]: No configuration found. Dec 16 14:14:50.768275 systemd[1]: Reloading finished in 510 ms. Dec 16 14:14:50.817444 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 14:14:50.831617 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 14:14:50.832085 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 14:14:50.843505 kernel: kauditd_printk_skb: 204 callbacks suppressed Dec 16 14:14:50.843751 kernel: audit: type=1131 audit(1765894490.830:404): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:50.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:50.832214 systemd[1]: kubelet.service: Consumed 1.731s CPU time, 130M memory peak. Dec 16 14:14:50.848011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 14:14:50.848000 audit: BPF prog-id=118 op=LOAD Dec 16 14:14:50.852215 kernel: audit: type=1334 audit(1765894490.848:405): prog-id=118 op=LOAD Dec 16 14:14:50.848000 audit: BPF prog-id=80 op=UNLOAD Dec 16 14:14:50.855207 kernel: audit: type=1334 audit(1765894490.848:406): prog-id=80 op=UNLOAD Dec 16 14:14:50.849000 audit: BPF prog-id=119 op=LOAD Dec 16 14:14:50.858199 kernel: audit: type=1334 audit(1765894490.849:407): prog-id=119 op=LOAD Dec 16 14:14:50.861977 kernel: audit: type=1334 audit(1765894490.849:408): prog-id=120 op=LOAD Dec 16 14:14:50.862048 kernel: audit: type=1334 audit(1765894490.849:409): prog-id=67 op=UNLOAD Dec 16 14:14:50.849000 audit: BPF prog-id=120 op=LOAD Dec 16 14:14:50.849000 audit: BPF prog-id=67 op=UNLOAD Dec 16 14:14:50.868171 kernel: audit: type=1334 audit(1765894490.849:410): prog-id=68 op=UNLOAD Dec 16 14:14:50.868257 kernel: audit: type=1334 audit(1765894490.851:411): prog-id=121 op=LOAD Dec 16 14:14:50.849000 audit: BPF prog-id=68 op=UNLOAD Dec 16 14:14:50.851000 audit: BPF prog-id=121 op=LOAD Dec 16 14:14:50.851000 audit: BPF prog-id=70 op=UNLOAD Dec 16 14:14:50.851000 audit: BPF prog-id=122 op=LOAD Dec 16 14:14:50.870438 kernel: audit: type=1334 audit(1765894490.851:412): prog-id=70 op=UNLOAD Dec 16 14:14:50.870501 kernel: audit: type=1334 audit(1765894490.851:413): prog-id=122 op=LOAD Dec 16 14:14:50.851000 audit: BPF prog-id=123 op=LOAD Dec 16 14:14:50.851000 audit: BPF prog-id=71 op=UNLOAD Dec 16 14:14:50.851000 audit: BPF prog-id=72 op=UNLOAD Dec 16 14:14:50.853000 audit: BPF prog-id=124 op=LOAD Dec 16 14:14:50.853000 audit: BPF prog-id=69 op=UNLOAD Dec 16 14:14:50.854000 audit: BPF prog-id=125 op=LOAD Dec 16 14:14:50.854000 audit: BPF prog-id=81 op=UNLOAD Dec 16 14:14:50.855000 audit: BPF prog-id=126 op=LOAD Dec 16 14:14:50.855000 audit: BPF prog-id=85 op=UNLOAD Dec 16 14:14:50.855000 audit: BPF prog-id=127 op=LOAD Dec 16 14:14:50.855000 audit: BPF prog-id=128 op=LOAD Dec 16 14:14:50.855000 audit: BPF prog-id=86 op=UNLOAD Dec 16 14:14:50.855000 audit: BPF prog-id=87 op=UNLOAD Dec 16 14:14:50.857000 audit: BPF prog-id=129 op=LOAD Dec 16 14:14:50.857000 audit: BPF prog-id=73 op=UNLOAD Dec 16 14:14:50.859000 audit: BPF prog-id=130 op=LOAD Dec 16 14:14:50.859000 audit: BPF prog-id=74 op=UNLOAD Dec 16 14:14:50.859000 audit: BPF prog-id=131 op=LOAD Dec 16 14:14:50.859000 audit: BPF prog-id=132 op=LOAD Dec 16 14:14:50.859000 audit: BPF prog-id=75 op=UNLOAD Dec 16 14:14:50.859000 audit: BPF prog-id=76 op=UNLOAD Dec 16 14:14:50.860000 audit: BPF prog-id=133 op=LOAD Dec 16 14:14:50.860000 audit: BPF prog-id=82 op=UNLOAD Dec 16 14:14:50.860000 audit: BPF prog-id=134 op=LOAD Dec 16 14:14:50.860000 audit: BPF prog-id=135 op=LOAD Dec 16 14:14:50.860000 audit: BPF prog-id=83 op=UNLOAD Dec 16 14:14:50.860000 
audit: BPF prog-id=84 op=UNLOAD Dec 16 14:14:50.862000 audit: BPF prog-id=136 op=LOAD Dec 16 14:14:50.862000 audit: BPF prog-id=77 op=UNLOAD Dec 16 14:14:50.862000 audit: BPF prog-id=137 op=LOAD Dec 16 14:14:50.862000 audit: BPF prog-id=138 op=LOAD Dec 16 14:14:50.862000 audit: BPF prog-id=78 op=UNLOAD Dec 16 14:14:50.862000 audit: BPF prog-id=79 op=UNLOAD Dec 16 14:14:51.192616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 14:14:51.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:14:51.206776 (kubelet)[2975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 14:14:51.320203 kubelet[2975]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 14:14:51.320203 kubelet[2975]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 14:14:51.320203 kubelet[2975]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 14:14:51.322771 kubelet[2975]: I1216 14:14:51.322646 2975 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 14:14:51.334560 kubelet[2975]: I1216 14:14:51.334096 2975 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 14:14:51.334560 kubelet[2975]: I1216 14:14:51.334139 2975 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 14:14:51.334805 kubelet[2975]: I1216 14:14:51.334613 2975 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 14:14:51.342494 kubelet[2975]: I1216 14:14:51.342440 2975 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 14:14:51.349651 kubelet[2975]: I1216 14:14:51.349044 2975 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 14:14:51.386330 kubelet[2975]: I1216 14:14:51.386238 2975 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 14:14:51.392436 kubelet[2975]: I1216 14:14:51.391974 2975 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 14:14:51.392436 kubelet[2975]: I1216 14:14:51.392389 2975 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 14:14:51.392691 kubelet[2975]: I1216 14:14:51.392431 2975 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-6slrx.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 14:14:51.396859 kubelet[2975]: I1216 14:14:51.396475 2975 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 14:14:51.396859 kubelet[2975]: I1216 14:14:51.396505 2975 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 14:14:51.396859 kubelet[2975]: I1216 14:14:51.396590 2975 state_mem.go:36] "Initialized new in-memory state store" Dec 16 14:14:51.397120 kubelet[2975]: I1216 14:14:51.397097 2975 kubelet.go:480] "Attempting to sync node with API server" Dec 16 14:14:51.398226 kubelet[2975]: I1216 14:14:51.398205 2975 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 14:14:51.410037 kubelet[2975]: I1216 14:14:51.409996 2975 kubelet.go:386] "Adding apiserver pod source" Dec 16 14:14:51.410258 kubelet[2975]: I1216 14:14:51.410238 2975 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 14:14:51.439174 kubelet[2975]: I1216 14:14:51.437772 2975 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 14:14:51.441199 kubelet[2975]: I1216 14:14:51.440511 2975 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 14:14:51.460042 kubelet[2975]: I1216 14:14:51.459906 2975 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 14:14:51.461290 kubelet[2975]: I1216 14:14:51.461260 2975 server.go:1289] "Started kubelet" Dec 16 14:14:51.467158 kubelet[2975]: I1216 14:14:51.462101 2975 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 
16 14:14:51.468371 kubelet[2975]: I1216 14:14:51.465135 2975 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 14:14:51.470414 kubelet[2975]: I1216 14:14:51.470392 2975 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 14:14:51.471904 kubelet[2975]: I1216 14:14:51.471875 2975 server.go:317] "Adding debug handlers to kubelet server" Dec 16 14:14:51.473206 kubelet[2975]: I1216 14:14:51.472797 2975 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 14:14:51.473206 kubelet[2975]: I1216 14:14:51.473003 2975 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 14:14:51.481193 kubelet[2975]: I1216 14:14:51.481146 2975 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 14:14:51.485715 kubelet[2975]: I1216 14:14:51.485681 2975 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 14:14:51.485996 kubelet[2975]: I1216 14:14:51.485978 2975 reconciler.go:26] "Reconciler: start to sync state" Dec 16 14:14:51.488715 kubelet[2975]: I1216 14:14:51.488613 2975 factory.go:223] Registration of the systemd container factory successfully Dec 16 14:14:51.489331 kubelet[2975]: I1216 14:14:51.488962 2975 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 14:14:51.497108 kubelet[2975]: I1216 14:14:51.496894 2975 factory.go:223] Registration of the containerd container factory successfully Dec 16 14:14:51.539296 kubelet[2975]: I1216 14:14:51.539170 2975 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 14:14:51.551120 kubelet[2975]: I1216 14:14:51.551070 2975 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 14:14:51.551715 kubelet[2975]: I1216 14:14:51.551401 2975 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 14:14:51.552473 kubelet[2975]: I1216 14:14:51.552300 2975 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 14:14:51.552473 kubelet[2975]: I1216 14:14:51.552335 2975 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 14:14:51.553160 kubelet[2975]: E1216 14:14:51.552780 2975 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 14:14:51.631206 kubelet[2975]: I1216 14:14:51.628549 2975 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 14:14:51.631206 kubelet[2975]: I1216 14:14:51.628597 2975 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 14:14:51.631206 kubelet[2975]: I1216 14:14:51.628671 2975 state_mem.go:36] "Initialized new in-memory state store" Dec 16 14:14:51.631206 kubelet[2975]: I1216 14:14:51.628934 2975 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 14:14:51.631206 kubelet[2975]: I1216 14:14:51.628953 2975 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 14:14:51.631206 kubelet[2975]: I1216 14:14:51.628987 2975 policy_none.go:49] "None policy: Start" Dec 16 14:14:51.631206 kubelet[2975]: I1216 14:14:51.629009 2975 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 14:14:51.631206 kubelet[2975]: I1216 14:14:51.629033 2975 state_mem.go:35] "Initializing new in-memory state store" Dec 16 14:14:51.631799 kubelet[2975]: I1216 14:14:51.629169 2975 state_mem.go:75] "Updated machine memory state" Dec 16 14:14:51.641935 kubelet[2975]: E1216 14:14:51.641869 2975 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 14:14:51.643275 kubelet[2975]: I1216 14:14:51.642825 2975 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 14:14:51.645323 kubelet[2975]: I1216 14:14:51.644606 2975 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 14:14:51.651214 kubelet[2975]: I1216 14:14:51.650382 2975 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 14:14:51.661202 kubelet[2975]: E1216 14:14:51.661087 2975 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 14:14:51.667046 kubelet[2975]: I1216 14:14:51.666979 2975 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.668936 kubelet[2975]: I1216 14:14:51.668407 2975 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.670337 kubelet[2975]: I1216 14:14:51.668604 2975 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.689009 kubelet[2975]: I1216 14:14:51.688961 2975 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 14:14:51.693541 kubelet[2975]: I1216 14:14:51.693513 2975 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 14:14:51.693758 kubelet[2975]: E1216 14:14:51.693729 2975 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-6slrx.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.694010 kubelet[2975]: I1216 14:14:51.693987 2975 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 14:14:51.788131 kubelet[2975]: I1216 14:14:51.787836 2975 kubelet_node_status.go:75] "Attempting to register node" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.788712 kubelet[2975]: I1216 14:14:51.788565 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c91136d2a857877877d7a3e0220dc573-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-6slrx.gb1.brightbox.com\" (UID: \"c91136d2a857877877d7a3e0220dc573\") " pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.789416 kubelet[2975]: I1216 14:14:51.789328 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ffab8ea880a30539b75a2fbd8dc0269a-kubeconfig\") pod \"kube-scheduler-srv-6slrx.gb1.brightbox.com\" (UID: \"ffab8ea880a30539b75a2fbd8dc0269a\") " pod="kube-system/kube-scheduler-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.789584 kubelet[2975]: I1216 14:14:51.789560 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad7baca014429577fe3301cf0912fedf-ca-certs\") pod \"kube-apiserver-srv-6slrx.gb1.brightbox.com\" (UID: \"ad7baca014429577fe3301cf0912fedf\") " pod="kube-system/kube-apiserver-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.789861 kubelet[2975]: I1216 14:14:51.789835 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad7baca014429577fe3301cf0912fedf-usr-share-ca-certificates\") pod \"kube-apiserver-srv-6slrx.gb1.brightbox.com\" (UID: \"ad7baca014429577fe3301cf0912fedf\") " pod="kube-system/kube-apiserver-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.791735 kubelet[2975]: I1216 14:14:51.789908 2975 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c91136d2a857877877d7a3e0220dc573-k8s-certs\") pod \"kube-controller-manager-srv-6slrx.gb1.brightbox.com\" (UID: \"c91136d2a857877877d7a3e0220dc573\") " pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.791735 kubelet[2975]: I1216 14:14:51.789960 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c91136d2a857877877d7a3e0220dc573-kubeconfig\") pod \"kube-controller-manager-srv-6slrx.gb1.brightbox.com\" (UID: \"c91136d2a857877877d7a3e0220dc573\") " pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.791735 kubelet[2975]: I1216 14:14:51.789986 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad7baca014429577fe3301cf0912fedf-k8s-certs\") pod \"kube-apiserver-srv-6slrx.gb1.brightbox.com\" (UID: \"ad7baca014429577fe3301cf0912fedf\") " pod="kube-system/kube-apiserver-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.791735 kubelet[2975]: I1216 14:14:51.790012 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c91136d2a857877877d7a3e0220dc573-ca-certs\") pod \"kube-controller-manager-srv-6slrx.gb1.brightbox.com\" (UID: \"c91136d2a857877877d7a3e0220dc573\") " pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.791735 kubelet[2975]: I1216 14:14:51.790037 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c91136d2a857877877d7a3e0220dc573-flexvolume-dir\") pod \"kube-controller-manager-srv-6slrx.gb1.brightbox.com\" (UID: \"c91136d2a857877877d7a3e0220dc573\") " pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.805074 kubelet[2975]: I1216 14:14:51.804507 2975 kubelet_node_status.go:124] "Node was previously registered" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:51.805074 kubelet[2975]: I1216 14:14:51.804623 2975 kubelet_node_status.go:78] "Successfully registered node" node="srv-6slrx.gb1.brightbox.com" Dec 16 14:14:52.418555 kubelet[2975]: I1216 14:14:52.418467 2975 apiserver.go:52] "Watching apiserver" Dec 16 14:14:52.486869 kubelet[2975]: I1216 14:14:52.486591 2975 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 14:14:52.598671 kubelet[2975]: I1216 14:14:52.598625 2975 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:52.610320 kubelet[2975]: I1216 14:14:52.610258 2975 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 14:14:52.610553 kubelet[2975]: E1216 14:14:52.610353 2975 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-6slrx.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-6slrx.gb1.brightbox.com" Dec 16 14:14:52.636778 kubelet[2975]: I1216 14:14:52.636682 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-6slrx.gb1.brightbox.com" 
podStartSLOduration=3.636654068 podStartE2EDuration="3.636654068s" podCreationTimestamp="2025-12-16 14:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 14:14:52.634469682 +0000 UTC m=+1.406111490" watchObservedRunningTime="2025-12-16 14:14:52.636654068 +0000 UTC m=+1.408295863" Dec 16 14:14:52.649219 kubelet[2975]: I1216 14:14:52.648878 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-6slrx.gb1.brightbox.com" podStartSLOduration=1.648857613 podStartE2EDuration="1.648857613s" podCreationTimestamp="2025-12-16 14:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 14:14:52.648323121 +0000 UTC m=+1.419964933" watchObservedRunningTime="2025-12-16 14:14:52.648857613 +0000 UTC m=+1.420499414" Dec 16 14:14:52.674355 kubelet[2975]: I1216 14:14:52.672811 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-6slrx.gb1.brightbox.com" podStartSLOduration=1.6727935999999999 podStartE2EDuration="1.6727936s" podCreationTimestamp="2025-12-16 14:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 14:14:52.660971896 +0000 UTC m=+1.432613707" watchObservedRunningTime="2025-12-16 14:14:52.6727936 +0000 UTC m=+1.444435404" Dec 16 14:14:57.070942 kubelet[2975]: I1216 14:14:57.070799 2975 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 14:14:57.072552 containerd[1683]: time="2025-12-16T14:14:57.072101904Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 14:14:57.072946 kubelet[2975]: I1216 14:14:57.072339 2975 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 14:14:57.874053 systemd[1]: Created slice kubepods-besteffort-podc52138ef_ea24_40db_a6fe_54c53dbb7374.slice - libcontainer container kubepods-besteffort-podc52138ef_ea24_40db_a6fe_54c53dbb7374.slice. 
Dec 16 14:14:57.935218 kubelet[2975]: I1216 14:14:57.934423 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c52138ef-ea24-40db-a6fe-54c53dbb7374-xtables-lock\") pod \"kube-proxy-9qsmk\" (UID: \"c52138ef-ea24-40db-a6fe-54c53dbb7374\") " pod="kube-system/kube-proxy-9qsmk" Dec 16 14:14:57.935218 kubelet[2975]: I1216 14:14:57.934490 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c52138ef-ea24-40db-a6fe-54c53dbb7374-lib-modules\") pod \"kube-proxy-9qsmk\" (UID: \"c52138ef-ea24-40db-a6fe-54c53dbb7374\") " pod="kube-system/kube-proxy-9qsmk" Dec 16 14:14:57.935218 kubelet[2975]: I1216 14:14:57.934525 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jb2j\" (UniqueName: \"kubernetes.io/projected/c52138ef-ea24-40db-a6fe-54c53dbb7374-kube-api-access-5jb2j\") pod \"kube-proxy-9qsmk\" (UID: \"c52138ef-ea24-40db-a6fe-54c53dbb7374\") " pod="kube-system/kube-proxy-9qsmk" Dec 16 14:14:57.935218 kubelet[2975]: I1216 14:14:57.934570 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c52138ef-ea24-40db-a6fe-54c53dbb7374-kube-proxy\") pod \"kube-proxy-9qsmk\" (UID: \"c52138ef-ea24-40db-a6fe-54c53dbb7374\") " pod="kube-system/kube-proxy-9qsmk" Dec 16 14:14:58.186174 containerd[1683]: time="2025-12-16T14:14:58.185335039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9qsmk,Uid:c52138ef-ea24-40db-a6fe-54c53dbb7374,Namespace:kube-system,Attempt:0,}" Dec 16 14:14:58.219218 containerd[1683]: time="2025-12-16T14:14:58.219144208Z" level=info msg="connecting to shim 9a01ca57122696a90bd14c896fe682c525f9a26942013bd2682fdbc9760a17ba" address="unix:///run/containerd/s/cf03772ff231f536667b35695b9f0cad8fe661426a45711af1c7fe29b1aed36c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:14:58.280843 systemd[1]: Started cri-containerd-9a01ca57122696a90bd14c896fe682c525f9a26942013bd2682fdbc9760a17ba.scope - libcontainer container 9a01ca57122696a90bd14c896fe682c525f9a26942013bd2682fdbc9760a17ba. 
Dec 16 14:14:58.356243 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 16 14:14:58.356532 kernel: audit: type=1334 audit(1765894498.349:448): prog-id=139 op=LOAD Dec 16 14:14:58.349000 audit: BPF prog-id=139 op=LOAD Dec 16 14:14:58.357000 audit: BPF prog-id=140 op=LOAD Dec 16 14:14:58.362259 kernel: audit: type=1334 audit(1765894498.357:449): prog-id=140 op=LOAD Dec 16 14:14:58.357000 audit[3045]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3034 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.368759 kernel: audit: type=1300 audit(1765894498.357:449): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3034 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961303163613537313232363936613930626431346338393666653638 Dec 16 14:14:58.376462 kernel: audit: type=1327 audit(1765894498.357:449): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961303163613537313232363936613930626431346338393666653638 Dec 16 14:14:58.357000 audit: BPF prog-id=140 op=UNLOAD Dec 16 14:14:58.384257 kernel: audit: type=1334 audit(1765894498.357:450): prog-id=140 op=UNLOAD Dec 16 14:14:58.384364 kernel: audit: type=1300 audit(1765894498.357:450): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.357000 audit[3045]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961303163613537313232363936613930626431346338393666653638 Dec 16 14:14:58.398269 kernel: audit: type=1327 audit(1765894498.357:450): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961303163613537313232363936613930626431346338393666653638 Dec 16 14:14:58.357000 audit: BPF prog-id=141 op=LOAD Dec 16 14:14:58.402202 kernel: audit: type=1334 audit(1765894498.357:451): prog-id=141 op=LOAD Dec 16 14:14:58.357000 audit[3045]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3034 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.409399 kernel: audit: type=1300 audit(1765894498.357:451): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3034 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961303163613537313232363936613930626431346338393666653638 Dec 16 14:14:58.411171 systemd[1]: Created slice kubepods-besteffort-podf3839bcf_2031_4003_9ba6_b10201529331.slice - libcontainer container kubepods-besteffort-podf3839bcf_2031_4003_9ba6_b10201529331.slice. Dec 16 14:14:58.418456 kernel: audit: type=1327 audit(1765894498.357:451): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961303163613537313232363936613930626431346338393666653638 Dec 16 14:14:58.357000 audit: BPF prog-id=142 op=LOAD Dec 16 14:14:58.357000 audit[3045]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3034 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961303163613537313232363936613930626431346338393666653638 Dec 16 14:14:58.357000 audit: BPF prog-id=142 op=UNLOAD Dec 16 14:14:58.357000 audit[3045]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961303163613537313232363936613930626431346338393666653638 Dec 16 14:14:58.357000 audit: BPF prog-id=141 op=UNLOAD Dec 16 14:14:58.357000 audit[3045]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961303163613537313232363936613930626431346338393666653638 Dec 16 14:14:58.357000 audit: BPF prog-id=143 op=LOAD Dec 16 14:14:58.357000 audit[3045]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3034 pid=3045 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961303163613537313232363936613930626431346338393666653638 Dec 16 14:14:58.439956 containerd[1683]: time="2025-12-16T14:14:58.438515918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9qsmk,Uid:c52138ef-ea24-40db-a6fe-54c53dbb7374,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a01ca57122696a90bd14c896fe682c525f9a26942013bd2682fdbc9760a17ba\"" Dec 16 14:14:58.440159 kubelet[2975]: I1216 14:14:58.439769 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bphk8\" (UniqueName: \"kubernetes.io/projected/f3839bcf-2031-4003-9ba6-b10201529331-kube-api-access-bphk8\") pod \"tigera-operator-7dcd859c48-h62j6\" (UID: \"f3839bcf-2031-4003-9ba6-b10201529331\") " pod="tigera-operator/tigera-operator-7dcd859c48-h62j6" Dec 16 14:14:58.440159 kubelet[2975]: I1216 14:14:58.439844 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f3839bcf-2031-4003-9ba6-b10201529331-var-lib-calico\") pod \"tigera-operator-7dcd859c48-h62j6\" (UID: \"f3839bcf-2031-4003-9ba6-b10201529331\") " pod="tigera-operator/tigera-operator-7dcd859c48-h62j6" Dec 16 14:14:58.450340 containerd[1683]: time="2025-12-16T14:14:58.450239759Z" level=info msg="CreateContainer within sandbox \"9a01ca57122696a90bd14c896fe682c525f9a26942013bd2682fdbc9760a17ba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 14:14:58.478865 containerd[1683]: time="2025-12-16T14:14:58.476621568Z" level=info msg="Container 9b03f44cac93d43cb345c7c122b4f8063c25bd03b9f34c399db1a3d1d8a36515: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:14:58.479433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2347475091.mount: Deactivated successfully. Dec 16 14:14:58.489952 containerd[1683]: time="2025-12-16T14:14:58.489905660Z" level=info msg="CreateContainer within sandbox \"9a01ca57122696a90bd14c896fe682c525f9a26942013bd2682fdbc9760a17ba\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9b03f44cac93d43cb345c7c122b4f8063c25bd03b9f34c399db1a3d1d8a36515\"" Dec 16 14:14:58.491113 containerd[1683]: time="2025-12-16T14:14:58.491070854Z" level=info msg="StartContainer for \"9b03f44cac93d43cb345c7c122b4f8063c25bd03b9f34c399db1a3d1d8a36515\"" Dec 16 14:14:58.493831 containerd[1683]: time="2025-12-16T14:14:58.493739830Z" level=info msg="connecting to shim 9b03f44cac93d43cb345c7c122b4f8063c25bd03b9f34c399db1a3d1d8a36515" address="unix:///run/containerd/s/cf03772ff231f536667b35695b9f0cad8fe661426a45711af1c7fe29b1aed36c" protocol=ttrpc version=3 Dec 16 14:14:58.539595 systemd[1]: Started cri-containerd-9b03f44cac93d43cb345c7c122b4f8063c25bd03b9f34c399db1a3d1d8a36515.scope - libcontainer container 9b03f44cac93d43cb345c7c122b4f8063c25bd03b9f34c399db1a3d1d8a36515. 
Dec 16 14:14:58.621000 audit: BPF prog-id=144 op=LOAD Dec 16 14:14:58.621000 audit[3073]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3034 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962303366343463616339336434336362333435633763313232623466 Dec 16 14:14:58.621000 audit: BPF prog-id=145 op=LOAD Dec 16 14:14:58.621000 audit[3073]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3034 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962303366343463616339336434336362333435633763313232623466 Dec 16 14:14:58.625000 audit: BPF prog-id=145 op=UNLOAD Dec 16 14:14:58.625000 audit[3073]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962303366343463616339336434336362333435633763313232623466 Dec 16 14:14:58.625000 audit: BPF prog-id=144 op=UNLOAD Dec 16 14:14:58.625000 audit[3073]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3034 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962303366343463616339336434336362333435633763313232623466 Dec 16 14:14:58.625000 audit: BPF prog-id=146 op=LOAD Dec 16 14:14:58.625000 audit[3073]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3034 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962303366343463616339336434336362333435633763313232623466 Dec 16 14:14:58.677724 containerd[1683]: time="2025-12-16T14:14:58.677556648Z" level=info msg="StartContainer for 
\"9b03f44cac93d43cb345c7c122b4f8063c25bd03b9f34c399db1a3d1d8a36515\" returns successfully" Dec 16 14:14:58.726939 containerd[1683]: time="2025-12-16T14:14:58.726869054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-h62j6,Uid:f3839bcf-2031-4003-9ba6-b10201529331,Namespace:tigera-operator,Attempt:0,}" Dec 16 14:14:58.754201 containerd[1683]: time="2025-12-16T14:14:58.753878472Z" level=info msg="connecting to shim 7370ceb6468bdafbb6d12521e4277c838491355e21a02f087a32e67014641808" address="unix:///run/containerd/s/33c117ae1f8528cc1f9a4985db39a46adc2ca492d3bdea6c7fa45cb56a608e3e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:14:58.798502 systemd[1]: Started cri-containerd-7370ceb6468bdafbb6d12521e4277c838491355e21a02f087a32e67014641808.scope - libcontainer container 7370ceb6468bdafbb6d12521e4277c838491355e21a02f087a32e67014641808. Dec 16 14:14:58.819000 audit: BPF prog-id=147 op=LOAD Dec 16 14:14:58.820000 audit: BPF prog-id=148 op=LOAD Dec 16 14:14:58.820000 audit[3124]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3113 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733373063656236343638626461666262366431323532316534323737 Dec 16 14:14:58.820000 audit: BPF prog-id=148 op=UNLOAD Dec 16 14:14:58.820000 audit[3124]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733373063656236343638626461666262366431323532316534323737 Dec 16 14:14:58.821000 audit: BPF prog-id=149 op=LOAD Dec 16 14:14:58.821000 audit[3124]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3113 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733373063656236343638626461666262366431323532316534323737 Dec 16 14:14:58.821000 audit: BPF prog-id=150 op=LOAD Dec 16 14:14:58.821000 audit[3124]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3113 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.821000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733373063656236343638626461666262366431323532316534323737 Dec 16 14:14:58.821000 audit: BPF prog-id=150 op=UNLOAD Dec 16 14:14:58.821000 audit[3124]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733373063656236343638626461666262366431323532316534323737 Dec 16 14:14:58.822000 audit: BPF prog-id=149 op=UNLOAD Dec 16 14:14:58.822000 audit[3124]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733373063656236343638626461666262366431323532316534323737 Dec 16 14:14:58.822000 audit: BPF prog-id=151 op=LOAD Dec 16 14:14:58.822000 audit[3124]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3113 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:58.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733373063656236343638626461666262366431323532316534323737 Dec 16 14:14:58.879341 containerd[1683]: time="2025-12-16T14:14:58.879241988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-h62j6,Uid:f3839bcf-2031-4003-9ba6-b10201529331,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7370ceb6468bdafbb6d12521e4277c838491355e21a02f087a32e67014641808\"" Dec 16 14:14:58.884013 containerd[1683]: time="2025-12-16T14:14:58.883787426Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 14:14:59.188000 audit[3184]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.188000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde29da850 a2=0 a3=7ffde29da83c items=0 ppid=3086 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 14:14:59.191000 audit[3185]: NETFILTER_CFG table=mangle:55 family=10 entries=1 
op=nft_register_chain pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.191000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff63cbb650 a2=0 a3=7fff63cbb63c items=0 ppid=3086 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.191000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 14:14:59.193000 audit[3188]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.193000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff01a80b70 a2=0 a3=7fff01a80b5c items=0 ppid=3086 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 14:14:59.194000 audit[3187]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.194000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc28b78e10 a2=0 a3=7ffc28b78dfc items=0 ppid=3086 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 14:14:59.195000 audit[3189]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.195000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3bc6eaa0 a2=0 a3=7ffd3bc6ea8c items=0 ppid=3086 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 14:14:59.197000 audit[3190]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.197000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2238f0a0 a2=0 a3=7ffe2238f08c items=0 ppid=3086 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.197000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 14:14:59.311000 audit[3193]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.311000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=108 a0=3 a1=7ffdde487520 a2=0 a3=7ffdde48750c items=0 ppid=3086 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.311000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 14:14:59.317000 audit[3195]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.317000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd9942b6e0 a2=0 a3=7ffd9942b6cc items=0 ppid=3086 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 14:14:59.324000 audit[3198]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.324000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd610bb3d0 a2=0 a3=7ffd610bb3bc items=0 ppid=3086 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.324000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 14:14:59.326000 audit[3199]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.326000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbe7ff360 a2=0 a3=7ffcbe7ff34c items=0 ppid=3086 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 14:14:59.331000 audit[3201]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.331000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc120ae1e0 a2=0 a3=7ffc120ae1cc items=0 ppid=3086 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.331000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 14:14:59.333000 audit[3202]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.333000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd46918e0 a2=0 a3=7fffd46918cc items=0 ppid=3086 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 14:14:59.337000 audit[3204]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.337000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffed4505f0 a2=0 a3=7fffed4505dc items=0 ppid=3086 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.337000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 14:14:59.342000 audit[3207]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.342000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff0fb0a420 a2=0 a3=7fff0fb0a40c items=0 ppid=3086 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 14:14:59.345000 audit[3208]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.345000 audit[3208]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdb6ca3a0 a2=0 a3=7ffcdb6ca38c items=0 ppid=3086 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 14:14:59.349000 audit[3210]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.349000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffef3a2ca10 a2=0 a3=7ffef3a2c9fc items=0 
ppid=3086 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 14:14:59.350000 audit[3211]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.350000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff760c4d40 a2=0 a3=7fff760c4d2c items=0 ppid=3086 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 14:14:59.354000 audit[3213]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.354000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff420a4cb0 a2=0 a3=7fff420a4c9c items=0 ppid=3086 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 14:14:59.360000 audit[3216]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.360000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe651e4580 a2=0 a3=7ffe651e456c items=0 ppid=3086 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 14:14:59.366000 audit[3219]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.366000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0bfe71e0 a2=0 a3=7fff0bfe71cc items=0 ppid=3086 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.366000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 14:14:59.368000 audit[3220]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.368000 audit[3220]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc6359a5f0 a2=0 a3=7ffc6359a5dc items=0 ppid=3086 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.368000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 14:14:59.372000 audit[3222]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.372000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe94ddf2d0 a2=0 a3=7ffe94ddf2bc items=0 ppid=3086 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.372000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 14:14:59.378000 audit[3225]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.378000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe19aa0ed0 a2=0 a3=7ffe19aa0ebc items=0 ppid=3086 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 14:14:59.380000 audit[3226]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.380000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8fdd6b10 a2=0 a3=7fff8fdd6afc items=0 ppid=3086 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.380000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 14:14:59.384000 audit[3228]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 14:14:59.384000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc3d203da0 a2=0 a3=7ffc3d203d8c items=0 ppid=3086 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.384000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 14:14:59.416000 audit[3234]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:14:59.416000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff4d196120 a2=0 a3=7fff4d19610c items=0 ppid=3086 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:14:59.426000 audit[3234]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:14:59.426000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff4d196120 a2=0 a3=7fff4d19610c items=0 ppid=3086 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:14:59.428000 audit[3239]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.428000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffaa47d540 a2=0 a3=7fffaa47d52c items=0 ppid=3086 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.428000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 14:14:59.433000 audit[3241]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.433000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd35ba7320 a2=0 a3=7ffd35ba730c items=0 ppid=3086 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 14:14:59.438000 audit[3244]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.438000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=752 a0=3 a1=7ffe487e3600 a2=0 a3=7ffe487e35ec items=0 ppid=3086 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.438000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 14:14:59.441000 audit[3245]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3245 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.441000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf1807030 a2=0 a3=7ffcf180701c items=0 ppid=3086 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 14:14:59.446000 audit[3247]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.446000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffd469d20 a2=0 a3=7ffffd469d0c items=0 ppid=3086 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.446000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 14:14:59.448000 audit[3248]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.448000 audit[3248]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd332cda0 a2=0 a3=7ffcd332cd8c items=0 ppid=3086 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.448000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 14:14:59.452000 audit[3250]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.452000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff9b67b600 a2=0 a3=7fff9b67b5ec items=0 ppid=3086 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.452000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 14:14:59.458000 audit[3253]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.458000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc6e5ff570 a2=0 a3=7ffc6e5ff55c items=0 ppid=3086 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 14:14:59.460000 audit[3254]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3254 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.460000 audit[3254]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbb96fe70 a2=0 a3=7ffdbb96fe5c items=0 ppid=3086 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 14:14:59.464000 audit[3256]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.464000 audit[3256]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff18450e90 a2=0 a3=7fff18450e7c items=0 ppid=3086 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 14:14:59.466000 audit[3257]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.466000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd74fcd0c0 a2=0 a3=7ffd74fcd0ac items=0 ppid=3086 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.466000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 14:14:59.470000 audit[3259]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.470000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff3f1c9f60 a2=0 a3=7fff3f1c9f4c 
items=0 ppid=3086 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 14:14:59.476000 audit[3262]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.476000 audit[3262]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc09dbaf20 a2=0 a3=7ffc09dbaf0c items=0 ppid=3086 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 14:14:59.484000 audit[3265]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.484000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdbb885a20 a2=0 a3=7ffdbb885a0c items=0 ppid=3086 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.484000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 14:14:59.486000 audit[3266]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.486000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd0b61b810 a2=0 a3=7ffd0b61b7fc items=0 ppid=3086 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.486000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 14:14:59.491000 audit[3268]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.491000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe67d86a30 a2=0 a3=7ffe67d86a1c items=0 ppid=3086 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.491000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 14:14:59.498000 audit[3271]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.498000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff2ed63bb0 a2=0 a3=7fff2ed63b9c items=0 ppid=3086 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 14:14:59.500000 audit[3272]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.500000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc80e68590 a2=0 a3=7ffc80e6857c items=0 ppid=3086 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.500000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 14:14:59.505000 audit[3274]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.505000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe773a48d0 a2=0 a3=7ffe773a48bc items=0 ppid=3086 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 14:14:59.507000 audit[3275]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.507000 audit[3275]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd24129ec0 a2=0 a3=7ffd24129eac items=0 ppid=3086 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 14:14:59.511000 audit[3277]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.511000 audit[3277]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd58e2a330 a2=0 a3=7ffd58e2a31c items=0 ppid=3086 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.511000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 14:14:59.520000 audit[3280]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 14:14:59.520000 audit[3280]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcb858f930 a2=0 a3=7ffcb858f91c items=0 ppid=3086 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.520000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 14:14:59.526000 audit[3282]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 14:14:59.526000 audit[3282]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc7e13fb80 a2=0 a3=7ffc7e13fb6c items=0 ppid=3086 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.526000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:14:59.528000 audit[3282]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 14:14:59.528000 audit[3282]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc7e13fb80 a2=0 a3=7ffc7e13fb6c items=0 ppid=3086 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:14:59.528000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:14:59.662161 kubelet[2975]: I1216 14:14:59.662067 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9qsmk" podStartSLOduration=2.662037209 podStartE2EDuration="2.662037209s" podCreationTimestamp="2025-12-16 14:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 14:14:59.645745364 +0000 UTC m=+8.417387188" watchObservedRunningTime="2025-12-16 14:14:59.662037209 +0000 UTC m=+8.433679012" Dec 16 14:15:02.785354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount547747216.mount: Deactivated successfully. 
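The proctitle= values in the audit records above are the audited command lines, hex-encoded with NUL bytes separating the argv elements; the first one in this stretch decodes to "iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle", i.e. kube-proxy creating its canary chain, and the rest follow the same pattern for the KUBE-* filter and nat chains. A minimal decoding sketch (illustrative only, not part of the captured log):

```python
# Decode the hex-encoded PROCTITLE field emitted by the Linux audit
# subsystem. argv elements are NUL-separated, so the hex string maps
# straight back to the original command line.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

if __name__ == "__main__":
    # Sample copied verbatim from one of the records above.
    sample = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
    print(decode_proctitle(sample))
    # -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
```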
Dec 16 14:15:04.175991 containerd[1683]: time="2025-12-16T14:15:04.174872630Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 14:15:04.178225 containerd[1683]: time="2025-12-16T14:15:04.177831630Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 5.293866069s" Dec 16 14:15:04.178225 containerd[1683]: time="2025-12-16T14:15:04.177871805Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 14:15:04.189234 containerd[1683]: time="2025-12-16T14:15:04.188786702Z" level=info msg="CreateContainer within sandbox \"7370ceb6468bdafbb6d12521e4277c838491355e21a02f087a32e67014641808\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 14:15:04.198317 containerd[1683]: time="2025-12-16T14:15:04.198288965Z" level=info msg="Container fd7c2e314665d197d4bf76c414abce7f583451b0b2465e6fea86d1858699354a: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:15:04.214735 containerd[1683]: time="2025-12-16T14:15:04.214654525Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:04.215814 containerd[1683]: time="2025-12-16T14:15:04.215783222Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:04.216712 containerd[1683]: time="2025-12-16T14:15:04.216681614Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:04.218624 containerd[1683]: time="2025-12-16T14:15:04.218570501Z" level=info msg="CreateContainer within sandbox \"7370ceb6468bdafbb6d12521e4277c838491355e21a02f087a32e67014641808\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fd7c2e314665d197d4bf76c414abce7f583451b0b2465e6fea86d1858699354a\"" Dec 16 14:15:04.221888 containerd[1683]: time="2025-12-16T14:15:04.221732112Z" level=info msg="StartContainer for \"fd7c2e314665d197d4bf76c414abce7f583451b0b2465e6fea86d1858699354a\"" Dec 16 14:15:04.223790 containerd[1683]: time="2025-12-16T14:15:04.223662524Z" level=info msg="connecting to shim fd7c2e314665d197d4bf76c414abce7f583451b0b2465e6fea86d1858699354a" address="unix:///run/containerd/s/33c117ae1f8528cc1f9a4985db39a46adc2ca492d3bdea6c7fa45cb56a608e3e" protocol=ttrpc version=3 Dec 16 14:15:04.263545 systemd[1]: Started cri-containerd-fd7c2e314665d197d4bf76c414abce7f583451b0b2465e6fea86d1858699354a.scope - libcontainer container fd7c2e314665d197d4bf76c414abce7f583451b0b2465e6fea86d1858699354a. 
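For scale, the pull record above reports 23558205 bytes read and an unpacked size of 25057686 bytes for quay.io/tigera/operator:v1.38.7 over 5.293866069s, roughly 4.5 MB/s on the wire. A quick back-of-the-envelope check, using only figures copied from the log:

```python
# Throughput implied by the containerd pull record above; the three
# constants are taken verbatim from the log lines.
bytes_read = 23_558_205      # "bytes read" when pulling stopped
image_size = 25_057_686      # reported size of the pulled image
duration_s = 5.293866069     # "in 5.293866069s"

print(f"wire throughput : {bytes_read / duration_s / 1e6:.2f} MB/s")
print(f"effective (size): {image_size / duration_s / 1e6:.2f} MB/s")
```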
Dec 16 14:15:04.292308 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 14:15:04.292432 kernel: audit: type=1334 audit(1765894504.286:520): prog-id=152 op=LOAD Dec 16 14:15:04.286000 audit: BPF prog-id=152 op=LOAD Dec 16 14:15:04.293000 audit: BPF prog-id=153 op=LOAD Dec 16 14:15:04.294946 kernel: audit: type=1334 audit(1765894504.293:521): prog-id=153 op=LOAD Dec 16 14:15:04.295029 kernel: audit: type=1300 audit(1765894504.293:521): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3113 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:04.293000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3113 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:04.300237 kernel: audit: type=1327 audit(1765894504.293:521): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664376332653331343636356431393764346266373663343134616263 Dec 16 14:15:04.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664376332653331343636356431393764346266373663343134616263 Dec 16 14:15:04.293000 audit: BPF prog-id=153 op=UNLOAD Dec 16 14:15:04.293000 audit[3294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:04.308641 kernel: audit: type=1334 audit(1765894504.293:522): prog-id=153 op=UNLOAD Dec 16 14:15:04.308724 kernel: audit: type=1300 audit(1765894504.293:522): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:04.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664376332653331343636356431393764346266373663343134616263 Dec 16 14:15:04.313551 kernel: audit: type=1327 audit(1765894504.293:522): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664376332653331343636356431393764346266373663343134616263 Dec 16 14:15:04.293000 audit: BPF prog-id=154 op=LOAD Dec 16 14:15:04.317612 kernel: audit: type=1334 audit(1765894504.293:523): prog-id=154 op=LOAD Dec 16 14:15:04.293000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3113 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:04.320467 kernel: audit: type=1300 audit(1765894504.293:523): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3113 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:04.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664376332653331343636356431393764346266373663343134616263 Dec 16 14:15:04.332260 kernel: audit: type=1327 audit(1765894504.293:523): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664376332653331343636356431393764346266373663343134616263 Dec 16 14:15:04.293000 audit: BPF prog-id=155 op=LOAD Dec 16 14:15:04.293000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3113 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:04.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664376332653331343636356431393764346266373663343134616263 Dec 16 14:15:04.293000 audit: BPF prog-id=155 op=UNLOAD Dec 16 14:15:04.293000 audit[3294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:04.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664376332653331343636356431393764346266373663343134616263 Dec 16 14:15:04.293000 audit: BPF prog-id=154 op=UNLOAD Dec 16 14:15:04.293000 audit[3294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:04.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664376332653331343636356431393764346266373663343134616263 Dec 16 14:15:04.293000 audit: BPF prog-id=156 op=LOAD Dec 16 14:15:04.293000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3113 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:04.293000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664376332653331343636356431393764346266373663343134616263 Dec 16 14:15:04.390032 containerd[1683]: time="2025-12-16T14:15:04.389708162Z" level=info msg="StartContainer for \"fd7c2e314665d197d4bf76c414abce7f583451b0b2465e6fea86d1858699354a\" returns successfully" Dec 16 14:15:04.660538 kubelet[2975]: I1216 14:15:04.660448 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-h62j6" podStartSLOduration=1.362592121 podStartE2EDuration="6.660403847s" podCreationTimestamp="2025-12-16 14:14:58 +0000 UTC" firstStartedPulling="2025-12-16 14:14:58.882779888 +0000 UTC m=+7.654421683" lastFinishedPulling="2025-12-16 14:15:04.180591618 +0000 UTC m=+12.952233409" observedRunningTime="2025-12-16 14:15:04.6597321 +0000 UTC m=+13.431373907" watchObservedRunningTime="2025-12-16 14:15:04.660403847 +0000 UTC m=+13.432045648" Dec 16 14:15:12.112643 sudo[1972]: pam_unix(sudo:session): session closed for user root Dec 16 14:15:12.118807 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 14:15:12.118913 kernel: audit: type=1106 audit(1765894512.111:528): pid=1972 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 14:15:12.111000 audit[1972]: USER_END pid=1972 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 14:15:12.111000 audit[1972]: CRED_DISP pid=1972 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 14:15:12.132235 kernel: audit: type=1104 audit(1765894512.111:529): pid=1972 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 14:15:12.269990 sshd[1960]: Connection closed by 139.178.89.65 port 35900 Dec 16 14:15:12.273055 sshd-session[1954]: pam_unix(sshd:session): session closed for user core Dec 16 14:15:12.274000 audit[1954]: USER_END pid=1954 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:15:12.283236 kernel: audit: type=1106 audit(1765894512.274:530): pid=1954 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:15:12.285645 systemd[1]: sshd@8-10.230.52.194:22-139.178.89.65:35900.service: Deactivated successfully. 
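The kernel-printed records above carry an audit(EPOCH.MSEC:SERIAL) stamp instead of a journald timestamp; converting the epoch part to UTC reproduces the surrounding wall-clock times (e.g. audit(1765894512.111:528) corresponds to 14:15:12.111 UTC on Dec 16). A small conversion sketch, illustrative only:

```python
# Convert the audit(EPOCH:SERIAL) stamps printed by kauditd, e.g.
# "audit(1765894512.111:528)", into wall-clock UTC so they can be
# lined up with the journald timestamps on the same lines.
import re
from datetime import datetime, timezone

def audit_stamp_to_utc(stamp: str) -> str:
    epoch, serial = re.match(r"audit\((\d+\.\d+):(\d+)\)", stamp).groups()
    ts = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return f"record {serial}: {ts.isoformat(timespec='milliseconds')}"

print(audit_stamp_to_utc("audit(1765894512.111:528)"))
# -> record 528: 2025-12-16T14:15:12.111+00:00
```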
Dec 16 14:15:12.275000 audit[1954]: CRED_DISP pid=1954 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:15:12.292510 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 14:15:12.293403 systemd[1]: session-11.scope: Consumed 7.558s CPU time, 152.1M memory peak. Dec 16 14:15:12.298661 systemd-logind[1646]: Session 11 logged out. Waiting for processes to exit. Dec 16 14:15:12.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.52.194:22-139.178.89.65:35900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:15:12.300401 kernel: audit: type=1104 audit(1765894512.275:531): pid=1954 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:15:12.300496 kernel: audit: type=1131 audit(1765894512.286:532): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.52.194:22-139.178.89.65:35900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:15:12.305245 systemd-logind[1646]: Removed session 11. Dec 16 14:15:12.601000 audit[3375]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:12.608365 kernel: audit: type=1325 audit(1765894512.601:533): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:12.601000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffa2a247b0 a2=0 a3=7fffa2a2479c items=0 ppid=3086 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:12.616224 kernel: audit: type=1300 audit(1765894512.601:533): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffa2a247b0 a2=0 a3=7fffa2a2479c items=0 ppid=3086 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:12.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:12.620252 kernel: audit: type=1327 audit(1765894512.601:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:12.620000 audit[3375]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:12.625251 kernel: audit: type=1325 audit(1765894512.620:534): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:12.620000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa2a247b0 a2=0 a3=0 items=0 ppid=3086 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:12.632214 kernel: audit: type=1300 audit(1765894512.620:534): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa2a247b0 a2=0 a3=0 items=0 ppid=3086 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:12.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:12.688000 audit[3377]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:12.688000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe68418430 a2=0 a3=7ffe6841841c items=0 ppid=3086 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:12.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:12.693000 audit[3377]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:12.693000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe68418430 a2=0 a3=0 items=0 ppid=3086 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:12.693000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:16.327000 audit[3381]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:16.327000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff374ce860 a2=0 a3=7fff374ce84c items=0 ppid=3086 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:16.327000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:16.333000 audit[3381]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:16.333000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff374ce860 a2=0 a3=0 items=0 ppid=3086 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:16.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:16.359000 audit[3383]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3383 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:16.359000 audit[3383]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc19370980 a2=0 a3=7ffc1937096c items=0 ppid=3086 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:16.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:16.364000 audit[3383]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:16.364000 audit[3383]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc19370980 a2=0 a3=0 items=0 ppid=3086 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:16.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:17.502000 audit[3385]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:17.505758 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 14:15:17.505841 kernel: audit: type=1325 audit(1765894517.502:541): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:17.502000 audit[3385]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff1d9b2ae0 a2=0 a3=7fff1d9b2acc items=0 ppid=3086 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:17.511277 kernel: audit: type=1300 audit(1765894517.502:541): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff1d9b2ae0 a2=0 a3=7fff1d9b2acc items=0 ppid=3086 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:17.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:17.516089 kernel: audit: type=1327 audit(1765894517.502:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:17.516000 audit[3385]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:17.525207 kernel: audit: type=1325 audit(1765894517.516:542): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:17.516000 audit[3385]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff1d9b2ae0 a2=0 a3=0 items=0 ppid=3086 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:17.532227 kernel: 
audit: type=1300 audit(1765894517.516:542): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff1d9b2ae0 a2=0 a3=0 items=0 ppid=3086 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:17.516000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:17.538210 kernel: audit: type=1327 audit(1765894517.516:542): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:18.414000 audit[3387]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:18.419316 kernel: audit: type=1325 audit(1765894518.414:543): table=filter:115 family=2 entries=20 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:18.414000 audit[3387]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe27aa2a90 a2=0 a3=7ffe27aa2a7c items=0 ppid=3086 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:18.428078 kernel: audit: type=1300 audit(1765894518.414:543): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe27aa2a90 a2=0 a3=7ffe27aa2a7c items=0 ppid=3086 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:18.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:18.436310 kernel: audit: type=1327 audit(1765894518.414:543): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:18.435000 audit[3387]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:18.440607 kernel: audit: type=1325 audit(1765894518.435:544): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:18.435000 audit[3387]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe27aa2a90 a2=0 a3=0 items=0 ppid=3086 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:18.435000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:18.488995 systemd[1]: Created slice kubepods-besteffort-podf84b3754_a109_4273_9b02_8a8bd2351514.slice - libcontainer container kubepods-besteffort-podf84b3754_a109_4273_9b02_8a8bd2351514.slice. 
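The NETFILTER_CFG/SYSCALL/PROCTITLE triples above are the audit trail of repeated iptables-restore runs (most likely kube-proxy syncing its rules): each sendmsg (syscall 46 on x86_64, arch=c000003e) registers a filter table that grows by one entry per sync (15, 16, 17, ... entries), and the hex proctitle is the process argv with NUL separators, here "iptables-restore -w 5 -W 100000 --noflush --counters". A minimal Go sketch that decodes such a blob (the helper name is illustrative):

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    // decodeProctitle turns an audit PROCTITLE hex blob (NUL-separated argv)
    // back into a readable command line.
    func decodeProctitle(h string) (string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return "", err
        }
        argv := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
        return strings.Join(argv, " "), nil
    }

    func main() {
        // The proctitle value repeated in the NETFILTER_CFG records above.
        const p = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
        cmd, err := decodeProctitle(p)
        if err != nil {
            panic(err)
        }
        fmt.Println(cmd) // iptables-restore -w 5 -W 100000 --noflush --counters
    }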
Dec 16 14:15:18.583972 kubelet[2975]: I1216 14:15:18.583641 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6h27\" (UniqueName: \"kubernetes.io/projected/f84b3754-a109-4273-9b02-8a8bd2351514-kube-api-access-n6h27\") pod \"calico-typha-b656d8c45-nzwpw\" (UID: \"f84b3754-a109-4273-9b02-8a8bd2351514\") " pod="calico-system/calico-typha-b656d8c45-nzwpw" Dec 16 14:15:18.583972 kubelet[2975]: I1216 14:15:18.583752 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f84b3754-a109-4273-9b02-8a8bd2351514-tigera-ca-bundle\") pod \"calico-typha-b656d8c45-nzwpw\" (UID: \"f84b3754-a109-4273-9b02-8a8bd2351514\") " pod="calico-system/calico-typha-b656d8c45-nzwpw" Dec 16 14:15:18.583972 kubelet[2975]: I1216 14:15:18.583798 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f84b3754-a109-4273-9b02-8a8bd2351514-typha-certs\") pod \"calico-typha-b656d8c45-nzwpw\" (UID: \"f84b3754-a109-4273-9b02-8a8bd2351514\") " pod="calico-system/calico-typha-b656d8c45-nzwpw" Dec 16 14:15:18.646022 systemd[1]: Created slice kubepods-besteffort-pod3ba2b616_70a2_46e6_b98c_469d6ea44222.slice - libcontainer container kubepods-besteffort-pod3ba2b616_70a2_46e6_b98c_469d6ea44222.slice. Dec 16 14:15:18.684816 kubelet[2975]: I1216 14:15:18.684577 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ba2b616-70a2-46e6-b98c-469d6ea44222-lib-modules\") pod \"calico-node-nhh62\" (UID: \"3ba2b616-70a2-46e6-b98c-469d6ea44222\") " pod="calico-system/calico-node-nhh62" Dec 16 14:15:18.686421 kubelet[2975]: I1216 14:15:18.685672 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3ba2b616-70a2-46e6-b98c-469d6ea44222-var-lib-calico\") pod \"calico-node-nhh62\" (UID: \"3ba2b616-70a2-46e6-b98c-469d6ea44222\") " pod="calico-system/calico-node-nhh62" Dec 16 14:15:18.686421 kubelet[2975]: I1216 14:15:18.685813 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3ba2b616-70a2-46e6-b98c-469d6ea44222-cni-log-dir\") pod \"calico-node-nhh62\" (UID: \"3ba2b616-70a2-46e6-b98c-469d6ea44222\") " pod="calico-system/calico-node-nhh62" Dec 16 14:15:18.686421 kubelet[2975]: I1216 14:15:18.685857 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ba2b616-70a2-46e6-b98c-469d6ea44222-tigera-ca-bundle\") pod \"calico-node-nhh62\" (UID: \"3ba2b616-70a2-46e6-b98c-469d6ea44222\") " pod="calico-system/calico-node-nhh62" Dec 16 14:15:18.686421 kubelet[2975]: I1216 14:15:18.685884 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3ba2b616-70a2-46e6-b98c-469d6ea44222-var-run-calico\") pod \"calico-node-nhh62\" (UID: \"3ba2b616-70a2-46e6-b98c-469d6ea44222\") " pod="calico-system/calico-node-nhh62" Dec 16 14:15:18.686421 kubelet[2975]: I1216 14:15:18.685925 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" 
(UniqueName: \"kubernetes.io/host-path/3ba2b616-70a2-46e6-b98c-469d6ea44222-flexvol-driver-host\") pod \"calico-node-nhh62\" (UID: \"3ba2b616-70a2-46e6-b98c-469d6ea44222\") " pod="calico-system/calico-node-nhh62" Dec 16 14:15:18.686864 kubelet[2975]: I1216 14:15:18.685963 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3ba2b616-70a2-46e6-b98c-469d6ea44222-node-certs\") pod \"calico-node-nhh62\" (UID: \"3ba2b616-70a2-46e6-b98c-469d6ea44222\") " pod="calico-system/calico-node-nhh62" Dec 16 14:15:18.686864 kubelet[2975]: I1216 14:15:18.685992 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3ba2b616-70a2-46e6-b98c-469d6ea44222-cni-bin-dir\") pod \"calico-node-nhh62\" (UID: \"3ba2b616-70a2-46e6-b98c-469d6ea44222\") " pod="calico-system/calico-node-nhh62" Dec 16 14:15:18.686864 kubelet[2975]: I1216 14:15:18.686037 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3ba2b616-70a2-46e6-b98c-469d6ea44222-cni-net-dir\") pod \"calico-node-nhh62\" (UID: \"3ba2b616-70a2-46e6-b98c-469d6ea44222\") " pod="calico-system/calico-node-nhh62" Dec 16 14:15:18.686864 kubelet[2975]: I1216 14:15:18.686064 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ba2b616-70a2-46e6-b98c-469d6ea44222-xtables-lock\") pod \"calico-node-nhh62\" (UID: \"3ba2b616-70a2-46e6-b98c-469d6ea44222\") " pod="calico-system/calico-node-nhh62" Dec 16 14:15:18.686864 kubelet[2975]: I1216 14:15:18.686091 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzzls\" (UniqueName: \"kubernetes.io/projected/3ba2b616-70a2-46e6-b98c-469d6ea44222-kube-api-access-pzzls\") pod \"calico-node-nhh62\" (UID: \"3ba2b616-70a2-46e6-b98c-469d6ea44222\") " pod="calico-system/calico-node-nhh62" Dec 16 14:15:18.687198 kubelet[2975]: I1216 14:15:18.686134 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3ba2b616-70a2-46e6-b98c-469d6ea44222-policysync\") pod \"calico-node-nhh62\" (UID: \"3ba2b616-70a2-46e6-b98c-469d6ea44222\") " pod="calico-system/calico-node-nhh62" Dec 16 14:15:18.766565 kubelet[2975]: E1216 14:15:18.766166 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:15:18.797378 kubelet[2975]: E1216 14:15:18.797242 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.797378 kubelet[2975]: W1216 14:15:18.797321 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.798458 containerd[1683]: time="2025-12-16T14:15:18.797851174Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-b656d8c45-nzwpw,Uid:f84b3754-a109-4273-9b02-8a8bd2351514,Namespace:calico-system,Attempt:0,}" Dec 16 14:15:18.805676 kubelet[2975]: E1216 14:15:18.805275 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.806119 kubelet[2975]: E1216 14:15:18.806097 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.806835 kubelet[2975]: W1216 14:15:18.806273 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.806835 kubelet[2975]: E1216 14:15:18.806303 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.831229 kubelet[2975]: E1216 14:15:18.831121 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.831229 kubelet[2975]: W1216 14:15:18.831155 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.831229 kubelet[2975]: E1216 14:15:18.831210 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.858125 kubelet[2975]: E1216 14:15:18.855414 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.858125 kubelet[2975]: W1216 14:15:18.855450 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.858125 kubelet[2975]: E1216 14:15:18.855485 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.858125 kubelet[2975]: E1216 14:15:18.855790 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.858125 kubelet[2975]: W1216 14:15:18.855804 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.858125 kubelet[2975]: E1216 14:15:18.855819 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:18.858125 kubelet[2975]: E1216 14:15:18.856056 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.858125 kubelet[2975]: W1216 14:15:18.856070 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.858125 kubelet[2975]: E1216 14:15:18.856084 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.858125 kubelet[2975]: E1216 14:15:18.856455 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.858639 kubelet[2975]: W1216 14:15:18.856469 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.858639 kubelet[2975]: E1216 14:15:18.856483 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.858639 kubelet[2975]: E1216 14:15:18.856747 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.858639 kubelet[2975]: W1216 14:15:18.856760 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.858639 kubelet[2975]: E1216 14:15:18.856790 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.858639 kubelet[2975]: E1216 14:15:18.857012 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.858639 kubelet[2975]: W1216 14:15:18.857025 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.858639 kubelet[2975]: E1216 14:15:18.857047 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.858639 kubelet[2975]: E1216 14:15:18.857953 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.858639 kubelet[2975]: W1216 14:15:18.857967 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.860357 kubelet[2975]: E1216 14:15:18.857982 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:18.860357 kubelet[2975]: E1216 14:15:18.860035 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.860357 kubelet[2975]: W1216 14:15:18.860062 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.860357 kubelet[2975]: E1216 14:15:18.860076 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.862352 kubelet[2975]: E1216 14:15:18.862255 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.862352 kubelet[2975]: W1216 14:15:18.862276 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.862352 kubelet[2975]: E1216 14:15:18.862292 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.862753 kubelet[2975]: E1216 14:15:18.862734 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.862934 kubelet[2975]: W1216 14:15:18.862845 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.862934 kubelet[2975]: E1216 14:15:18.862873 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.863402 kubelet[2975]: E1216 14:15:18.863281 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.863402 kubelet[2975]: W1216 14:15:18.863311 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.863402 kubelet[2975]: E1216 14:15:18.863331 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.863986 kubelet[2975]: E1216 14:15:18.863884 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.863986 kubelet[2975]: W1216 14:15:18.863903 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.863986 kubelet[2975]: E1216 14:15:18.863918 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:18.864499 kubelet[2975]: E1216 14:15:18.864412 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.864499 kubelet[2975]: W1216 14:15:18.864430 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.864499 kubelet[2975]: E1216 14:15:18.864445 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.864889 kubelet[2975]: E1216 14:15:18.864870 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.865045 kubelet[2975]: W1216 14:15:18.864965 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.865045 kubelet[2975]: E1216 14:15:18.864989 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.865508 kubelet[2975]: E1216 14:15:18.865415 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.865508 kubelet[2975]: W1216 14:15:18.865432 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.865508 kubelet[2975]: E1216 14:15:18.865446 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.866284 kubelet[2975]: E1216 14:15:18.866242 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.866592 kubelet[2975]: W1216 14:15:18.866404 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.866592 kubelet[2975]: E1216 14:15:18.866429 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.867133 kubelet[2975]: E1216 14:15:18.867043 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.867133 kubelet[2975]: W1216 14:15:18.867062 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.867133 kubelet[2975]: E1216 14:15:18.867077 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:18.867584 kubelet[2975]: E1216 14:15:18.867497 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.867584 kubelet[2975]: W1216 14:15:18.867514 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.867584 kubelet[2975]: E1216 14:15:18.867529 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.868093 kubelet[2975]: E1216 14:15:18.868002 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.868093 kubelet[2975]: W1216 14:15:18.868020 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.868093 kubelet[2975]: E1216 14:15:18.868034 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.868482 kubelet[2975]: E1216 14:15:18.868463 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.868681 kubelet[2975]: W1216 14:15:18.868564 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.868681 kubelet[2975]: E1216 14:15:18.868587 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.886126 containerd[1683]: time="2025-12-16T14:15:18.885885355Z" level=info msg="connecting to shim a79ec1014b54ce43df41d29c37f2cea231c54f95f6a01beddff8c779593c3e73" address="unix:///run/containerd/s/19528c4b0dea9c9b712f7c7e6a799d22b264ac966ca9237af73225f2147796da" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:15:18.888637 kubelet[2975]: E1216 14:15:18.888413 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.889267 kubelet[2975]: W1216 14:15:18.888908 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.889267 kubelet[2975]: E1216 14:15:18.888947 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:18.889267 kubelet[2975]: I1216 14:15:18.889025 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvzdp\" (UniqueName: \"kubernetes.io/projected/1248d2d9-77a6-4a9d-9b93-4af871a2edbf-kube-api-access-tvzdp\") pod \"csi-node-driver-tfzg7\" (UID: \"1248d2d9-77a6-4a9d-9b93-4af871a2edbf\") " pod="calico-system/csi-node-driver-tfzg7" Dec 16 14:15:18.889602 kubelet[2975]: E1216 14:15:18.889482 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.889602 kubelet[2975]: W1216 14:15:18.889522 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.889602 kubelet[2975]: E1216 14:15:18.889540 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.890051 kubelet[2975]: E1216 14:15:18.889925 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.890245 kubelet[2975]: W1216 14:15:18.890220 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.890393 kubelet[2975]: E1216 14:15:18.890247 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.892337 kubelet[2975]: E1216 14:15:18.892312 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.892337 kubelet[2975]: W1216 14:15:18.892334 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.892463 kubelet[2975]: E1216 14:15:18.892351 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.892463 kubelet[2975]: I1216 14:15:18.892413 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1248d2d9-77a6-4a9d-9b93-4af871a2edbf-varrun\") pod \"csi-node-driver-tfzg7\" (UID: \"1248d2d9-77a6-4a9d-9b93-4af871a2edbf\") " pod="calico-system/csi-node-driver-tfzg7" Dec 16 14:15:18.892747 kubelet[2975]: E1216 14:15:18.892715 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.892976 kubelet[2975]: W1216 14:15:18.892759 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.892976 kubelet[2975]: E1216 14:15:18.892788 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:18.892976 kubelet[2975]: I1216 14:15:18.892837 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1248d2d9-77a6-4a9d-9b93-4af871a2edbf-kubelet-dir\") pod \"csi-node-driver-tfzg7\" (UID: \"1248d2d9-77a6-4a9d-9b93-4af871a2edbf\") " pod="calico-system/csi-node-driver-tfzg7" Dec 16 14:15:18.893599 kubelet[2975]: E1216 14:15:18.893220 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.893599 kubelet[2975]: W1216 14:15:18.893242 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.893599 kubelet[2975]: E1216 14:15:18.893259 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.894058 kubelet[2975]: E1216 14:15:18.894038 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.894197 kubelet[2975]: W1216 14:15:18.894158 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.894304 kubelet[2975]: E1216 14:15:18.894284 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.895247 kubelet[2975]: E1216 14:15:18.895096 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.895247 kubelet[2975]: W1216 14:15:18.895115 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.895247 kubelet[2975]: E1216 14:15:18.895134 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.895247 kubelet[2975]: I1216 14:15:18.895169 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1248d2d9-77a6-4a9d-9b93-4af871a2edbf-socket-dir\") pod \"csi-node-driver-tfzg7\" (UID: \"1248d2d9-77a6-4a9d-9b93-4af871a2edbf\") " pod="calico-system/csi-node-driver-tfzg7" Dec 16 14:15:18.897034 kubelet[2975]: E1216 14:15:18.895890 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.897034 kubelet[2975]: W1216 14:15:18.895912 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.897034 kubelet[2975]: E1216 14:15:18.895958 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:18.897034 kubelet[2975]: E1216 14:15:18.896257 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.897034 kubelet[2975]: W1216 14:15:18.896590 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.897034 kubelet[2975]: E1216 14:15:18.896613 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.897393 kubelet[2975]: E1216 14:15:18.897371 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.897446 kubelet[2975]: W1216 14:15:18.897414 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.897446 kubelet[2975]: E1216 14:15:18.897433 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.897446 kubelet[2975]: I1216 14:15:18.897493 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1248d2d9-77a6-4a9d-9b93-4af871a2edbf-registration-dir\") pod \"csi-node-driver-tfzg7\" (UID: \"1248d2d9-77a6-4a9d-9b93-4af871a2edbf\") " pod="calico-system/csi-node-driver-tfzg7" Dec 16 14:15:18.898643 kubelet[2975]: E1216 14:15:18.897736 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.898643 kubelet[2975]: W1216 14:15:18.897752 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.898643 kubelet[2975]: E1216 14:15:18.897779 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.898643 kubelet[2975]: E1216 14:15:18.898069 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.898643 kubelet[2975]: W1216 14:15:18.898083 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.898643 kubelet[2975]: E1216 14:15:18.898098 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:18.898643 kubelet[2975]: E1216 14:15:18.898420 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.898643 kubelet[2975]: W1216 14:15:18.898434 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.898643 kubelet[2975]: E1216 14:15:18.898448 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.899292 kubelet[2975]: E1216 14:15:18.899272 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:18.899422 kubelet[2975]: W1216 14:15:18.899366 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:18.899422 kubelet[2975]: E1216 14:15:18.899394 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:18.958266 containerd[1683]: time="2025-12-16T14:15:18.957146256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nhh62,Uid:3ba2b616-70a2-46e6-b98c-469d6ea44222,Namespace:calico-system,Attempt:0,}" Dec 16 14:15:18.960617 systemd[1]: Started cri-containerd-a79ec1014b54ce43df41d29c37f2cea231c54f95f6a01beddff8c779593c3e73.scope - libcontainer container a79ec1014b54ce43df41d29c37f2cea231c54f95f6a01beddff8c779593c3e73. Dec 16 14:15:19.000109 kubelet[2975]: E1216 14:15:19.000011 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.000109 kubelet[2975]: W1216 14:15:19.000065 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.000109 kubelet[2975]: E1216 14:15:19.000113 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.001910 kubelet[2975]: E1216 14:15:19.000814 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.001910 kubelet[2975]: W1216 14:15:19.000834 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.001910 kubelet[2975]: E1216 14:15:19.000970 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:19.001910 kubelet[2975]: E1216 14:15:19.001710 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.001910 kubelet[2975]: W1216 14:15:19.001724 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.001910 kubelet[2975]: E1216 14:15:19.001739 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.003330 kubelet[2975]: E1216 14:15:19.002691 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.003330 kubelet[2975]: W1216 14:15:19.002712 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.003330 kubelet[2975]: E1216 14:15:19.002843 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.004294 kubelet[2975]: E1216 14:15:19.004232 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.004294 kubelet[2975]: W1216 14:15:19.004252 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.004294 kubelet[2975]: E1216 14:15:19.004268 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.006273 kubelet[2975]: E1216 14:15:19.006252 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.006739 kubelet[2975]: W1216 14:15:19.006377 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.006739 kubelet[2975]: E1216 14:15:19.006403 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.008132 kubelet[2975]: E1216 14:15:19.008078 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.008132 kubelet[2975]: W1216 14:15:19.008097 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.008132 kubelet[2975]: E1216 14:15:19.008113 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:19.010351 kubelet[2975]: E1216 14:15:19.010286 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.010351 kubelet[2975]: W1216 14:15:19.010305 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.010351 kubelet[2975]: E1216 14:15:19.010330 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.011002 kubelet[2975]: E1216 14:15:19.010941 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.011002 kubelet[2975]: W1216 14:15:19.010962 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.011002 kubelet[2975]: E1216 14:15:19.010981 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.013150 kubelet[2975]: E1216 14:15:19.013089 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.013150 kubelet[2975]: W1216 14:15:19.013111 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.013150 kubelet[2975]: E1216 14:15:19.013128 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.013701 kubelet[2975]: E1216 14:15:19.013647 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.013701 kubelet[2975]: W1216 14:15:19.013665 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.013701 kubelet[2975]: E1216 14:15:19.013680 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.016926 kubelet[2975]: E1216 14:15:19.016900 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.017342 kubelet[2975]: W1216 14:15:19.017221 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.017342 kubelet[2975]: E1216 14:15:19.017249 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:19.019508 kubelet[2975]: E1216 14:15:19.019445 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.019508 kubelet[2975]: W1216 14:15:19.019465 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.019508 kubelet[2975]: E1216 14:15:19.019482 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.021316 kubelet[2975]: E1216 14:15:19.021241 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.021316 kubelet[2975]: W1216 14:15:19.021269 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.021316 kubelet[2975]: E1216 14:15:19.021295 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.022610 kubelet[2975]: E1216 14:15:19.022544 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.022610 kubelet[2975]: W1216 14:15:19.022563 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.022610 kubelet[2975]: E1216 14:15:19.022579 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.023548 kubelet[2975]: E1216 14:15:19.023495 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.023548 kubelet[2975]: W1216 14:15:19.023513 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.023548 kubelet[2975]: E1216 14:15:19.023528 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.024728 kubelet[2975]: E1216 14:15:19.024639 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.024728 kubelet[2975]: W1216 14:15:19.024663 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.024728 kubelet[2975]: E1216 14:15:19.024678 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:19.025912 kubelet[2975]: E1216 14:15:19.025891 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.026217 kubelet[2975]: W1216 14:15:19.026045 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.027307 kubelet[2975]: E1216 14:15:19.026092 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.027758 kubelet[2975]: E1216 14:15:19.027699 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.027758 kubelet[2975]: W1216 14:15:19.027717 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.027758 kubelet[2975]: E1216 14:15:19.027734 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.028865 kubelet[2975]: E1216 14:15:19.028376 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.029113 kubelet[2975]: W1216 14:15:19.028920 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.029113 kubelet[2975]: E1216 14:15:19.028961 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.030489 kubelet[2975]: E1216 14:15:19.030461 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.030594 kubelet[2975]: W1216 14:15:19.030574 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.030812 kubelet[2975]: E1216 14:15:19.030668 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.032368 kubelet[2975]: E1216 14:15:19.032340 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.032619 kubelet[2975]: W1216 14:15:19.032475 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.032619 kubelet[2975]: E1216 14:15:19.032501 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:19.034281 kubelet[2975]: E1216 14:15:19.033562 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.034281 kubelet[2975]: W1216 14:15:19.033581 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.034281 kubelet[2975]: E1216 14:15:19.033601 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.036749 kubelet[2975]: E1216 14:15:19.036413 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.036749 kubelet[2975]: W1216 14:15:19.036433 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.037456 kubelet[2975]: E1216 14:15:19.037337 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.040227 kubelet[2975]: E1216 14:15:19.040164 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.040519 kubelet[2975]: W1216 14:15:19.040421 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.040519 kubelet[2975]: E1216 14:15:19.040442 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.058484 containerd[1683]: time="2025-12-16T14:15:19.058375965Z" level=info msg="connecting to shim dadb207e3569946b19681404e4b36ea88e7d16bd21e7604736deee9447fc477e" address="unix:///run/containerd/s/0fede2e378026f954fe5c6b71716da0b0db7ce1c86aca98ac7aa47a16263e756" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:15:19.120593 kubelet[2975]: E1216 14:15:19.120541 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:19.121335 kubelet[2975]: W1216 14:15:19.120571 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:19.121335 kubelet[2975]: E1216 14:15:19.121244 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:19.121443 systemd[1]: Started cri-containerd-dadb207e3569946b19681404e4b36ea88e7d16bd21e7604736deee9447fc477e.scope - libcontainer container dadb207e3569946b19681404e4b36ea88e7d16bd21e7604736deee9447fc477e. 
Dec 16 14:15:19.127000 audit: BPF prog-id=157 op=LOAD Dec 16 14:15:19.128000 audit: BPF prog-id=158 op=LOAD Dec 16 14:15:19.128000 audit[3449]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3424 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137396563313031346235346365343364663431643239633337663263 Dec 16 14:15:19.129000 audit: BPF prog-id=158 op=UNLOAD Dec 16 14:15:19.129000 audit[3449]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137396563313031346235346365343364663431643239633337663263 Dec 16 14:15:19.129000 audit: BPF prog-id=159 op=LOAD Dec 16 14:15:19.129000 audit[3449]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3424 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137396563313031346235346365343364663431643239633337663263 Dec 16 14:15:19.129000 audit: BPF prog-id=160 op=LOAD Dec 16 14:15:19.129000 audit[3449]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3424 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137396563313031346235346365343364663431643239633337663263 Dec 16 14:15:19.129000 audit: BPF prog-id=160 op=UNLOAD Dec 16 14:15:19.129000 audit[3449]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137396563313031346235346365343364663431643239633337663263 Dec 16 14:15:19.129000 audit: BPF prog-id=159 op=UNLOAD Dec 16 14:15:19.129000 audit[3449]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137396563313031346235346365343364663431643239633337663263 Dec 16 14:15:19.129000 audit: BPF prog-id=161 op=LOAD Dec 16 14:15:19.129000 audit[3449]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3424 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137396563313031346235346365343364663431643239633337663263 Dec 16 14:15:19.209000 audit: BPF prog-id=162 op=LOAD Dec 16 14:15:19.213000 audit: BPF prog-id=163 op=LOAD Dec 16 14:15:19.213000 audit[3512]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3499 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461646232303765333536393934366231393638313430346534623336 Dec 16 14:15:19.216000 audit: BPF prog-id=163 op=UNLOAD Dec 16 14:15:19.216000 audit[3512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3499 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461646232303765333536393934366231393638313430346534623336 Dec 16 14:15:19.217000 audit: BPF prog-id=164 op=LOAD Dec 16 14:15:19.217000 audit[3512]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3499 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461646232303765333536393934366231393638313430346534623336 Dec 16 14:15:19.222000 audit: BPF prog-id=165 op=LOAD Dec 16 14:15:19.222000 audit[3512]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3499 pid=3512 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461646232303765333536393934366231393638313430346534623336 Dec 16 14:15:19.222000 audit: BPF prog-id=165 op=UNLOAD Dec 16 14:15:19.222000 audit[3512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3499 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461646232303765333536393934366231393638313430346534623336 Dec 16 14:15:19.223000 audit: BPF prog-id=164 op=UNLOAD Dec 16 14:15:19.223000 audit[3512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3499 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461646232303765333536393934366231393638313430346534623336 Dec 16 14:15:19.223000 audit: BPF prog-id=166 op=LOAD Dec 16 14:15:19.223000 audit[3512]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3499 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461646232303765333536393934366231393638313430346534623336 Dec 16 14:15:19.270063 containerd[1683]: time="2025-12-16T14:15:19.269945925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b656d8c45-nzwpw,Uid:f84b3754-a109-4273-9b02-8a8bd2351514,Namespace:calico-system,Attempt:0,} returns sandbox id \"a79ec1014b54ce43df41d29c37f2cea231c54f95f6a01beddff8c779593c3e73\"" Dec 16 14:15:19.276277 containerd[1683]: time="2025-12-16T14:15:19.276212012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 14:15:19.374009 containerd[1683]: time="2025-12-16T14:15:19.373830997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nhh62,Uid:3ba2b616-70a2-46e6-b98c-469d6ea44222,Namespace:calico-system,Attempt:0,} returns sandbox id \"dadb207e3569946b19681404e4b36ea88e7d16bd21e7604736deee9447fc477e\"" Dec 16 14:15:19.480000 audit[3550]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:19.480000 audit[3550]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=8224 a0=3 a1=7ffc87e08a60 a2=0 a3=7ffc87e08a4c items=0 ppid=3086 pid=3550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:19.484000 audit[3550]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:19.484000 audit[3550]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc87e08a60 a2=0 a3=0 items=0 ppid=3086 pid=3550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:19.484000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:20.553514 kubelet[2975]: E1216 14:15:20.553404 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:15:20.834861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767834396.mount: Deactivated successfully. Dec 16 14:15:22.555215 kubelet[2975]: E1216 14:15:22.553605 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:15:23.052235 containerd[1683]: time="2025-12-16T14:15:23.051487253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:23.054023 containerd[1683]: time="2025-12-16T14:15:23.053977908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 14:15:23.054735 containerd[1683]: time="2025-12-16T14:15:23.054702779Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:23.069250 containerd[1683]: time="2025-12-16T14:15:23.069144353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:23.071449 containerd[1683]: time="2025-12-16T14:15:23.071284526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.795011677s" Dec 16 14:15:23.071449 containerd[1683]: time="2025-12-16T14:15:23.071334285Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 14:15:23.074640 containerd[1683]: time="2025-12-16T14:15:23.074517850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 14:15:23.117225 containerd[1683]: time="2025-12-16T14:15:23.116040191Z" level=info msg="CreateContainer within sandbox \"a79ec1014b54ce43df41d29c37f2cea231c54f95f6a01beddff8c779593c3e73\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 14:15:23.126443 containerd[1683]: time="2025-12-16T14:15:23.126390270Z" level=info msg="Container fa742d8363b774643633349e07759dc639ff4d0fa12244d04f2dbb06075a17d2: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:15:23.134937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851338239.mount: Deactivated successfully. Dec 16 14:15:23.140048 containerd[1683]: time="2025-12-16T14:15:23.139999697Z" level=info msg="CreateContainer within sandbox \"a79ec1014b54ce43df41d29c37f2cea231c54f95f6a01beddff8c779593c3e73\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fa742d8363b774643633349e07759dc639ff4d0fa12244d04f2dbb06075a17d2\"" Dec 16 14:15:23.145128 containerd[1683]: time="2025-12-16T14:15:23.143379133Z" level=info msg="StartContainer for \"fa742d8363b774643633349e07759dc639ff4d0fa12244d04f2dbb06075a17d2\"" Dec 16 14:15:23.146441 containerd[1683]: time="2025-12-16T14:15:23.146396528Z" level=info msg="connecting to shim fa742d8363b774643633349e07759dc639ff4d0fa12244d04f2dbb06075a17d2" address="unix:///run/containerd/s/19528c4b0dea9c9b712f7c7e6a799d22b264ac966ca9237af73225f2147796da" protocol=ttrpc version=3 Dec 16 14:15:23.210487 systemd[1]: Started cri-containerd-fa742d8363b774643633349e07759dc639ff4d0fa12244d04f2dbb06075a17d2.scope - libcontainer container fa742d8363b774643633349e07759dc639ff4d0fa12244d04f2dbb06075a17d2. 
Dec 16 14:15:23.247524 kernel: kauditd_printk_skb: 52 callbacks suppressed Dec 16 14:15:23.247967 kernel: audit: type=1334 audit(1765894523.240:563): prog-id=167 op=LOAD Dec 16 14:15:23.240000 audit: BPF prog-id=167 op=LOAD Dec 16 14:15:23.248000 audit: BPF prog-id=168 op=LOAD Dec 16 14:15:23.252941 kernel: audit: type=1334 audit(1765894523.248:564): prog-id=168 op=LOAD Dec 16 14:15:23.253040 kernel: audit: type=1300 audit(1765894523.248:564): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3424 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:23.248000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3424 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:23.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661373432643833363362373734363433363333333439653037373539 Dec 16 14:15:23.259853 kernel: audit: type=1327 audit(1765894523.248:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661373432643833363362373734363433363333333439653037373539 Dec 16 14:15:23.249000 audit: BPF prog-id=168 op=UNLOAD Dec 16 14:15:23.263634 kernel: audit: type=1334 audit(1765894523.249:565): prog-id=168 op=UNLOAD Dec 16 14:15:23.249000 audit[3561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:23.266226 kernel: audit: type=1300 audit(1765894523.249:565): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:23.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661373432643833363362373734363433363333333439653037373539 Dec 16 14:15:23.271390 kernel: audit: type=1327 audit(1765894523.249:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661373432643833363362373734363433363333333439653037373539 Dec 16 14:15:23.252000 audit: BPF prog-id=169 op=LOAD Dec 16 14:15:23.276030 kernel: audit: type=1334 audit(1765894523.252:566): prog-id=169 op=LOAD Dec 16 14:15:23.276302 kernel: audit: type=1300 audit(1765894523.252:566): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3424 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:23.252000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3424 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:23.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661373432643833363362373734363433363333333439653037373539 Dec 16 14:15:23.283128 kernel: audit: type=1327 audit(1765894523.252:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661373432643833363362373734363433363333333439653037373539 Dec 16 14:15:23.252000 audit: BPF prog-id=170 op=LOAD Dec 16 14:15:23.252000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3424 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:23.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661373432643833363362373734363433363333333439653037373539 Dec 16 14:15:23.252000 audit: BPF prog-id=170 op=UNLOAD Dec 16 14:15:23.252000 audit[3561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:23.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661373432643833363362373734363433363333333439653037373539 Dec 16 14:15:23.253000 audit: BPF prog-id=169 op=UNLOAD Dec 16 14:15:23.253000 audit[3561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3424 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:23.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661373432643833363362373734363433363333333439653037373539 Dec 16 14:15:23.253000 audit: BPF prog-id=171 op=LOAD Dec 16 14:15:23.253000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3424 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:23.253000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661373432643833363362373734363433363333333439653037373539 Dec 16 14:15:23.340525 containerd[1683]: time="2025-12-16T14:15:23.340382515Z" level=info msg="StartContainer for \"fa742d8363b774643633349e07759dc639ff4d0fa12244d04f2dbb06075a17d2\" returns successfully" Dec 16 14:15:23.802122 kubelet[2975]: E1216 14:15:23.801960 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.802122 kubelet[2975]: W1216 14:15:23.802001 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.802122 kubelet[2975]: E1216 14:15:23.802036 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.803653 kubelet[2975]: E1216 14:15:23.803483 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.803653 kubelet[2975]: W1216 14:15:23.803501 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.803653 kubelet[2975]: E1216 14:15:23.803525 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.804333 kubelet[2975]: E1216 14:15:23.803836 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.804333 kubelet[2975]: W1216 14:15:23.803849 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.804333 kubelet[2975]: E1216 14:15:23.803863 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.805027 kubelet[2975]: E1216 14:15:23.805008 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.805256 kubelet[2975]: W1216 14:15:23.805128 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.805256 kubelet[2975]: E1216 14:15:23.805153 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:23.806102 kubelet[2975]: E1216 14:15:23.805748 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.806102 kubelet[2975]: W1216 14:15:23.805938 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.806102 kubelet[2975]: E1216 14:15:23.805959 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.806702 kubelet[2975]: E1216 14:15:23.806602 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.806702 kubelet[2975]: W1216 14:15:23.806620 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.806702 kubelet[2975]: E1216 14:15:23.806635 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.807650 kubelet[2975]: E1216 14:15:23.807554 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.807650 kubelet[2975]: W1216 14:15:23.807573 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.807650 kubelet[2975]: E1216 14:15:23.807588 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.808116 kubelet[2975]: E1216 14:15:23.808014 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.808116 kubelet[2975]: W1216 14:15:23.808032 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.808116 kubelet[2975]: E1216 14:15:23.808047 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.808689 kubelet[2975]: E1216 14:15:23.808574 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.808689 kubelet[2975]: W1216 14:15:23.808591 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.808689 kubelet[2975]: E1216 14:15:23.808606 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:23.809335 kubelet[2975]: E1216 14:15:23.809171 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.809335 kubelet[2975]: W1216 14:15:23.809238 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.809335 kubelet[2975]: E1216 14:15:23.809254 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.810117 kubelet[2975]: E1216 14:15:23.810030 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.810117 kubelet[2975]: W1216 14:15:23.810048 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.810117 kubelet[2975]: E1216 14:15:23.810063 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.811002 kubelet[2975]: E1216 14:15:23.810863 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.811002 kubelet[2975]: W1216 14:15:23.810904 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.811002 kubelet[2975]: E1216 14:15:23.810921 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.813199 kubelet[2975]: E1216 14:15:23.813047 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.813443 kubelet[2975]: W1216 14:15:23.813302 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.813443 kubelet[2975]: E1216 14:15:23.813334 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.813924 kubelet[2975]: E1216 14:15:23.813905 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.814149 kubelet[2975]: W1216 14:15:23.814008 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.814149 kubelet[2975]: E1216 14:15:23.814033 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:23.814833 kubelet[2975]: E1216 14:15:23.814801 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.815321 kubelet[2975]: W1216 14:15:23.815053 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.815321 kubelet[2975]: E1216 14:15:23.815080 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.851787 kubelet[2975]: E1216 14:15:23.851741 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.852157 kubelet[2975]: W1216 14:15:23.851994 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.852157 kubelet[2975]: E1216 14:15:23.852034 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.854512 kubelet[2975]: E1216 14:15:23.854455 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.854512 kubelet[2975]: W1216 14:15:23.854475 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.854512 kubelet[2975]: E1216 14:15:23.854491 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.855080 kubelet[2975]: E1216 14:15:23.855026 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.855080 kubelet[2975]: W1216 14:15:23.855045 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.855080 kubelet[2975]: E1216 14:15:23.855059 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.856442 kubelet[2975]: E1216 14:15:23.856386 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.856442 kubelet[2975]: W1216 14:15:23.856405 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.856442 kubelet[2975]: E1216 14:15:23.856421 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:23.856975 kubelet[2975]: E1216 14:15:23.856919 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.856975 kubelet[2975]: W1216 14:15:23.856937 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.856975 kubelet[2975]: E1216 14:15:23.856952 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.857483 kubelet[2975]: E1216 14:15:23.857432 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.857483 kubelet[2975]: W1216 14:15:23.857449 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.857483 kubelet[2975]: E1216 14:15:23.857463 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.857996 kubelet[2975]: E1216 14:15:23.857945 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.857996 kubelet[2975]: W1216 14:15:23.857962 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.857996 kubelet[2975]: E1216 14:15:23.857976 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.858530 kubelet[2975]: E1216 14:15:23.858474 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.858530 kubelet[2975]: W1216 14:15:23.858495 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.858530 kubelet[2975]: E1216 14:15:23.858510 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.860209 kubelet[2975]: E1216 14:15:23.859270 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.860330 kubelet[2975]: W1216 14:15:23.860309 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.860422 kubelet[2975]: E1216 14:15:23.860403 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:23.860899 kubelet[2975]: E1216 14:15:23.860846 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.860899 kubelet[2975]: W1216 14:15:23.860864 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.860899 kubelet[2975]: E1216 14:15:23.860879 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.861439 kubelet[2975]: E1216 14:15:23.861388 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.861439 kubelet[2975]: W1216 14:15:23.861405 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.861439 kubelet[2975]: E1216 14:15:23.861419 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.862253 kubelet[2975]: E1216 14:15:23.862198 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.862253 kubelet[2975]: W1216 14:15:23.862218 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.862253 kubelet[2975]: E1216 14:15:23.862232 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.863269 kubelet[2975]: E1216 14:15:23.862736 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.863269 kubelet[2975]: W1216 14:15:23.863228 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.863269 kubelet[2975]: E1216 14:15:23.863248 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.863964 kubelet[2975]: E1216 14:15:23.863912 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.863964 kubelet[2975]: W1216 14:15:23.863929 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.863964 kubelet[2975]: E1216 14:15:23.863944 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:23.865269 kubelet[2975]: E1216 14:15:23.864415 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.865269 kubelet[2975]: W1216 14:15:23.865226 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.865269 kubelet[2975]: E1216 14:15:23.865247 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.865824 kubelet[2975]: E1216 14:15:23.865771 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.865824 kubelet[2975]: W1216 14:15:23.865789 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.865824 kubelet[2975]: E1216 14:15:23.865804 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.866710 kubelet[2975]: E1216 14:15:23.866618 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.866710 kubelet[2975]: W1216 14:15:23.866637 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.866710 kubelet[2975]: E1216 14:15:23.866652 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:23.874451 kubelet[2975]: E1216 14:15:23.874418 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:23.874732 kubelet[2975]: W1216 14:15:23.874608 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:23.874732 kubelet[2975]: E1216 14:15:23.874639 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:24.554362 kubelet[2975]: E1216 14:15:24.554286 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:15:24.674837 containerd[1683]: time="2025-12-16T14:15:24.674752804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:24.676474 containerd[1683]: time="2025-12-16T14:15:24.676419394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 14:15:24.677369 containerd[1683]: time="2025-12-16T14:15:24.677326648Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:24.680858 containerd[1683]: time="2025-12-16T14:15:24.680787719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:24.682034 containerd[1683]: time="2025-12-16T14:15:24.681598468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.607021573s" Dec 16 14:15:24.704552 containerd[1683]: time="2025-12-16T14:15:24.681639887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 14:15:24.709129 containerd[1683]: time="2025-12-16T14:15:24.709086354Z" level=info msg="CreateContainer within sandbox \"dadb207e3569946b19681404e4b36ea88e7d16bd21e7604736deee9447fc477e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 14:15:24.719541 containerd[1683]: time="2025-12-16T14:15:24.719507802Z" level=info msg="Container 7dc8d75f8182e112481e75149602c9d483bb3eb43e3a5a404b5647547053ec3b: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:15:24.737154 kubelet[2975]: I1216 14:15:24.737073 2975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 14:15:24.756913 containerd[1683]: time="2025-12-16T14:15:24.756852033Z" level=info msg="CreateContainer within sandbox \"dadb207e3569946b19681404e4b36ea88e7d16bd21e7604736deee9447fc477e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7dc8d75f8182e112481e75149602c9d483bb3eb43e3a5a404b5647547053ec3b\"" Dec 16 14:15:24.764202 containerd[1683]: time="2025-12-16T14:15:24.763499798Z" level=info msg="StartContainer for \"7dc8d75f8182e112481e75149602c9d483bb3eb43e3a5a404b5647547053ec3b\"" Dec 16 14:15:24.765669 containerd[1683]: time="2025-12-16T14:15:24.765627753Z" level=info msg="connecting to shim 7dc8d75f8182e112481e75149602c9d483bb3eb43e3a5a404b5647547053ec3b" 
address="unix:///run/containerd/s/0fede2e378026f954fe5c6b71716da0b0db7ce1c86aca98ac7aa47a16263e756" protocol=ttrpc version=3 Dec 16 14:15:24.797484 systemd[1]: Started cri-containerd-7dc8d75f8182e112481e75149602c9d483bb3eb43e3a5a404b5647547053ec3b.scope - libcontainer container 7dc8d75f8182e112481e75149602c9d483bb3eb43e3a5a404b5647547053ec3b. Dec 16 14:15:24.824662 kubelet[2975]: E1216 14:15:24.824527 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.824662 kubelet[2975]: W1216 14:15:24.824572 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.824662 kubelet[2975]: E1216 14:15:24.824604 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.826347 kubelet[2975]: E1216 14:15:24.824880 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.826347 kubelet[2975]: W1216 14:15:24.824893 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.826347 kubelet[2975]: E1216 14:15:24.824929 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.826347 kubelet[2975]: E1216 14:15:24.825553 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.826347 kubelet[2975]: W1216 14:15:24.825572 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.826347 kubelet[2975]: E1216 14:15:24.825588 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.826347 kubelet[2975]: E1216 14:15:24.826034 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.826347 kubelet[2975]: W1216 14:15:24.826052 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.826347 kubelet[2975]: E1216 14:15:24.826066 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:24.827840 kubelet[2975]: E1216 14:15:24.826497 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.827840 kubelet[2975]: W1216 14:15:24.826509 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.827840 kubelet[2975]: E1216 14:15:24.826523 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.827840 kubelet[2975]: E1216 14:15:24.826813 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.827840 kubelet[2975]: W1216 14:15:24.826845 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.827840 kubelet[2975]: E1216 14:15:24.826861 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.827840 kubelet[2975]: E1216 14:15:24.827152 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.827840 kubelet[2975]: W1216 14:15:24.827200 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.827840 kubelet[2975]: E1216 14:15:24.827218 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.827840 kubelet[2975]: E1216 14:15:24.827491 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.829006 kubelet[2975]: W1216 14:15:24.827504 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.829006 kubelet[2975]: E1216 14:15:24.827517 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.829006 kubelet[2975]: E1216 14:15:24.827817 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.829006 kubelet[2975]: W1216 14:15:24.827847 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.829006 kubelet[2975]: E1216 14:15:24.827864 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:24.829006 kubelet[2975]: E1216 14:15:24.828146 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.829006 kubelet[2975]: W1216 14:15:24.828158 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.829006 kubelet[2975]: E1216 14:15:24.828204 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.829006 kubelet[2975]: E1216 14:15:24.828481 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.829006 kubelet[2975]: W1216 14:15:24.828494 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.830087 kubelet[2975]: E1216 14:15:24.828507 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.830087 kubelet[2975]: E1216 14:15:24.828897 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.830087 kubelet[2975]: W1216 14:15:24.828911 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.830087 kubelet[2975]: E1216 14:15:24.828937 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.830087 kubelet[2975]: E1216 14:15:24.829395 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.830087 kubelet[2975]: W1216 14:15:24.829408 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.830087 kubelet[2975]: E1216 14:15:24.829421 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.830087 kubelet[2975]: E1216 14:15:24.829679 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.830087 kubelet[2975]: W1216 14:15:24.829693 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.830087 kubelet[2975]: E1216 14:15:24.829707 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:24.830898 kubelet[2975]: E1216 14:15:24.830517 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.830898 kubelet[2975]: W1216 14:15:24.830530 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.830898 kubelet[2975]: E1216 14:15:24.830544 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.866000 audit: BPF prog-id=172 op=LOAD Dec 16 14:15:24.866000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3499 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:24.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764633864373566383138326531313234383165373531343936303263 Dec 16 14:15:24.866000 audit: BPF prog-id=173 op=LOAD Dec 16 14:15:24.866000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3499 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:24.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764633864373566383138326531313234383165373531343936303263 Dec 16 14:15:24.866000 audit: BPF prog-id=173 op=UNLOAD Dec 16 14:15:24.866000 audit[3637]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3499 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:24.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764633864373566383138326531313234383165373531343936303263 Dec 16 14:15:24.866000 audit: BPF prog-id=172 op=UNLOAD Dec 16 14:15:24.866000 audit[3637]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3499 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:24.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764633864373566383138326531313234383165373531343936303263 Dec 16 14:15:24.866000 audit: BPF prog-id=174 op=LOAD Dec 16 14:15:24.866000 audit[3637]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3499 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:24.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764633864373566383138326531313234383165373531343936303263 Dec 16 14:15:24.871456 kubelet[2975]: E1216 14:15:24.868129 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.871456 kubelet[2975]: W1216 14:15:24.868152 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.871456 kubelet[2975]: E1216 14:15:24.868817 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.871456 kubelet[2975]: E1216 14:15:24.869529 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.871456 kubelet[2975]: W1216 14:15:24.869543 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.871456 kubelet[2975]: E1216 14:15:24.869713 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.871456 kubelet[2975]: E1216 14:15:24.870123 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.871456 kubelet[2975]: W1216 14:15:24.870139 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.871456 kubelet[2975]: E1216 14:15:24.870163 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.872641 kubelet[2975]: E1216 14:15:24.871461 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.872641 kubelet[2975]: W1216 14:15:24.871476 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.872641 kubelet[2975]: E1216 14:15:24.871491 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:24.872641 kubelet[2975]: E1216 14:15:24.872225 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.872641 kubelet[2975]: W1216 14:15:24.872241 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.872641 kubelet[2975]: E1216 14:15:24.872256 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.872641 kubelet[2975]: E1216 14:15:24.872507 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.872641 kubelet[2975]: W1216 14:15:24.872523 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.872641 kubelet[2975]: E1216 14:15:24.872537 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.874963 kubelet[2975]: E1216 14:15:24.872869 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.874963 kubelet[2975]: W1216 14:15:24.872882 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.874963 kubelet[2975]: E1216 14:15:24.872899 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.874963 kubelet[2975]: E1216 14:15:24.874215 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.874963 kubelet[2975]: W1216 14:15:24.874232 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.874963 kubelet[2975]: E1216 14:15:24.874247 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.875586 kubelet[2975]: E1216 14:15:24.875502 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.875659 kubelet[2975]: W1216 14:15:24.875587 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.875659 kubelet[2975]: E1216 14:15:24.875605 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:24.876593 kubelet[2975]: E1216 14:15:24.876439 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.876593 kubelet[2975]: W1216 14:15:24.876458 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.876593 kubelet[2975]: E1216 14:15:24.876474 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.877026 kubelet[2975]: E1216 14:15:24.876818 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.877026 kubelet[2975]: W1216 14:15:24.876947 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.877026 kubelet[2975]: E1216 14:15:24.876965 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.878205 kubelet[2975]: E1216 14:15:24.877824 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.878205 kubelet[2975]: W1216 14:15:24.877842 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.878205 kubelet[2975]: E1216 14:15:24.877858 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.878927 kubelet[2975]: E1216 14:15:24.878614 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.878927 kubelet[2975]: W1216 14:15:24.878644 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.878927 kubelet[2975]: E1216 14:15:24.878672 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.879527 kubelet[2975]: E1216 14:15:24.879492 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.879780 kubelet[2975]: W1216 14:15:24.879684 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.879982 kubelet[2975]: E1216 14:15:24.879709 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:15:24.881031 kubelet[2975]: E1216 14:15:24.880986 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.881393 kubelet[2975]: W1216 14:15:24.881117 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.881393 kubelet[2975]: E1216 14:15:24.881155 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.882445 kubelet[2975]: E1216 14:15:24.882278 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.882445 kubelet[2975]: W1216 14:15:24.882296 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.882445 kubelet[2975]: E1216 14:15:24.882312 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.883222 kubelet[2975]: E1216 14:15:24.882803 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.883222 kubelet[2975]: W1216 14:15:24.882820 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.883222 kubelet[2975]: E1216 14:15:24.882835 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.884277 kubelet[2975]: E1216 14:15:24.884256 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:15:24.884277 kubelet[2975]: W1216 14:15:24.884275 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:15:24.884391 kubelet[2975]: E1216 14:15:24.884290 2975 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:15:24.911573 containerd[1683]: time="2025-12-16T14:15:24.911525614Z" level=info msg="StartContainer for \"7dc8d75f8182e112481e75149602c9d483bb3eb43e3a5a404b5647547053ec3b\" returns successfully" Dec 16 14:15:24.927581 systemd[1]: cri-containerd-7dc8d75f8182e112481e75149602c9d483bb3eb43e3a5a404b5647547053ec3b.scope: Deactivated successfully. 
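The repeated driver-call.go/plugins.go pairs above come from the kubelet's periodic FlexVolume probe of the nodeagent~uds plugin directory: the uds binary is not present, so the driver call returns empty output, and unmarshalling an empty JSON document is what produces the "unexpected end of JSON input" text. A minimal Go sketch reproducing just that error string (not the kubelet's actual driver-call code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The FlexVolume driver call produced no output at all, so the kubelet
	// effectively unmarshals an empty document into its status struct.
	var status map[string]interface{}
	err := json.Unmarshal([]byte(""), &status)
	fmt.Println(err) // unexpected end of JSON input
}
```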
Dec 16 14:15:24.930000 audit: BPF prog-id=174 op=UNLOAD Dec 16 14:15:24.946840 containerd[1683]: time="2025-12-16T14:15:24.946672933Z" level=info msg="received container exit event container_id:\"7dc8d75f8182e112481e75149602c9d483bb3eb43e3a5a404b5647547053ec3b\" id:\"7dc8d75f8182e112481e75149602c9d483bb3eb43e3a5a404b5647547053ec3b\" pid:3649 exited_at:{seconds:1765894524 nanos:933589718}" Dec 16 14:15:24.977920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dc8d75f8182e112481e75149602c9d483bb3eb43e3a5a404b5647547053ec3b-rootfs.mount: Deactivated successfully. Dec 16 14:15:25.745762 containerd[1683]: time="2025-12-16T14:15:25.745628099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 14:15:25.767897 kubelet[2975]: I1216 14:15:25.766863 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b656d8c45-nzwpw" podStartSLOduration=3.967260903 podStartE2EDuration="7.766825603s" podCreationTimestamp="2025-12-16 14:15:18 +0000 UTC" firstStartedPulling="2025-12-16 14:15:19.273435975 +0000 UTC m=+28.045077765" lastFinishedPulling="2025-12-16 14:15:23.073000664 +0000 UTC m=+31.844642465" observedRunningTime="2025-12-16 14:15:23.78005118 +0000 UTC m=+32.551692988" watchObservedRunningTime="2025-12-16 14:15:25.766825603 +0000 UTC m=+34.538467406" Dec 16 14:15:26.553513 kubelet[2975]: E1216 14:15:26.553401 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:15:28.554039 kubelet[2975]: E1216 14:15:28.553759 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:15:30.558854 kubelet[2975]: E1216 14:15:30.558766 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:15:31.155363 containerd[1683]: time="2025-12-16T14:15:31.155306997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:31.159606 containerd[1683]: time="2025-12-16T14:15:31.159569253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 14:15:31.159730 containerd[1683]: time="2025-12-16T14:15:31.159657277Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:31.163126 containerd[1683]: time="2025-12-16T14:15:31.163090183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:31.163668 containerd[1683]: time="2025-12-16T14:15:31.163630763Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.417896852s" Dec 16 14:15:31.163761 containerd[1683]: time="2025-12-16T14:15:31.163671027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 14:15:31.185814 containerd[1683]: time="2025-12-16T14:15:31.185756126Z" level=info msg="CreateContainer within sandbox \"dadb207e3569946b19681404e4b36ea88e7d16bd21e7604736deee9447fc477e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 14:15:31.199855 containerd[1683]: time="2025-12-16T14:15:31.199365857Z" level=info msg="Container 646919c7dd3a8489a9d19924809f09edda01cb32372f74dee6261da4f420290a: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:15:31.207536 containerd[1683]: time="2025-12-16T14:15:31.207483856Z" level=info msg="CreateContainer within sandbox \"dadb207e3569946b19681404e4b36ea88e7d16bd21e7604736deee9447fc477e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"646919c7dd3a8489a9d19924809f09edda01cb32372f74dee6261da4f420290a\"" Dec 16 14:15:31.210526 containerd[1683]: time="2025-12-16T14:15:31.210485436Z" level=info msg="StartContainer for \"646919c7dd3a8489a9d19924809f09edda01cb32372f74dee6261da4f420290a\"" Dec 16 14:15:31.213817 containerd[1683]: time="2025-12-16T14:15:31.213783380Z" level=info msg="connecting to shim 646919c7dd3a8489a9d19924809f09edda01cb32372f74dee6261da4f420290a" address="unix:///run/containerd/s/0fede2e378026f954fe5c6b71716da0b0db7ce1c86aca98ac7aa47a16263e756" protocol=ttrpc version=3 Dec 16 14:15:31.255851 systemd[1]: Started cri-containerd-646919c7dd3a8489a9d19924809f09edda01cb32372f74dee6261da4f420290a.scope - libcontainer container 646919c7dd3a8489a9d19924809f09edda01cb32372f74dee6261da4f420290a. 
Dec 16 14:15:31.343959 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 16 14:15:31.345804 kernel: audit: type=1334 audit(1765894531.336:577): prog-id=175 op=LOAD Dec 16 14:15:31.336000 audit: BPF prog-id=175 op=LOAD Dec 16 14:15:31.336000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3499 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:31.353309 kernel: audit: type=1300 audit(1765894531.336:577): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3499 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:31.353422 kernel: audit: type=1327 audit(1765894531.336:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634363931396337646433613834383961396431393932343830396630 Dec 16 14:15:31.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634363931396337646433613834383961396431393932343830396630 Dec 16 14:15:31.345000 audit: BPF prog-id=176 op=LOAD Dec 16 14:15:31.359725 kernel: audit: type=1334 audit(1765894531.345:578): prog-id=176 op=LOAD Dec 16 14:15:31.345000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3499 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:31.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634363931396337646433613834383961396431393932343830396630 Dec 16 14:15:31.367199 kernel: audit: type=1300 audit(1765894531.345:578): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3499 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:31.367299 kernel: audit: type=1327 audit(1765894531.345:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634363931396337646433613834383961396431393932343830396630 Dec 16 14:15:31.371220 kernel: audit: type=1334 audit(1765894531.346:579): prog-id=176 op=UNLOAD Dec 16 14:15:31.346000 audit: BPF prog-id=176 op=UNLOAD Dec 16 14:15:31.346000 audit[3732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3499 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:31.374467 kernel: audit: type=1300 
audit(1765894531.346:579): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3499 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:31.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634363931396337646433613834383961396431393932343830396630 Dec 16 14:15:31.379924 kernel: audit: type=1327 audit(1765894531.346:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634363931396337646433613834383961396431393932343830396630 Dec 16 14:15:31.357000 audit: BPF prog-id=175 op=UNLOAD Dec 16 14:15:31.357000 audit[3732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3499 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:31.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634363931396337646433613834383961396431393932343830396630 Dec 16 14:15:31.357000 audit: BPF prog-id=177 op=LOAD Dec 16 14:15:31.386199 kernel: audit: type=1334 audit(1765894531.357:580): prog-id=175 op=UNLOAD Dec 16 14:15:31.357000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3499 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:31.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634363931396337646433613834383961396431393932343830396630 Dec 16 14:15:31.509016 containerd[1683]: time="2025-12-16T14:15:31.508957969Z" level=info msg="StartContainer for \"646919c7dd3a8489a9d19924809f09edda01cb32372f74dee6261da4f420290a\" returns successfully" Dec 16 14:15:31.843206 kubelet[2975]: E1216 14:15:31.842977 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:15:32.713413 systemd[1]: cri-containerd-646919c7dd3a8489a9d19924809f09edda01cb32372f74dee6261da4f420290a.scope: Deactivated successfully. Dec 16 14:15:32.713886 systemd[1]: cri-containerd-646919c7dd3a8489a9d19924809f09edda01cb32372f74dee6261da4f420290a.scope: Consumed 820ms CPU time, 165.5M memory peak, 5.7M read from disk, 171.3M written to disk. 
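The audit PROCTITLE values above (and the earlier ones at 14:15:24) are hex-encoded because the recorded command line contains NUL separators between argv entries; decoding one recovers the runc invocation behind this shim task. A short sketch using a truncated prefix of one of the values above (the truncation is only to keep the example readable):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Prefix of a proctitle= field from one of the audit records above.
	const proctitleHex = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	raw, err := hex.DecodeString(proctitleHex)
	if err != nil {
		panic(err)
	}
	// auditd hex-encodes the process title when it contains non-printable
	// bytes; the argv slots are separated by NULs, so splitting recovers them.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(args) // [runc --root /run/containerd/runc/k8s.io]
}
```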
Dec 16 14:15:32.717000 audit: BPF prog-id=177 op=UNLOAD Dec 16 14:15:32.754668 containerd[1683]: time="2025-12-16T14:15:32.754616170Z" level=info msg="received container exit event container_id:\"646919c7dd3a8489a9d19924809f09edda01cb32372f74dee6261da4f420290a\" id:\"646919c7dd3a8489a9d19924809f09edda01cb32372f74dee6261da4f420290a\" pid:3745 exited_at:{seconds:1765894532 nanos:754095316}" Dec 16 14:15:32.783215 kubelet[2975]: I1216 14:15:32.781372 2975 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 14:15:32.838340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-646919c7dd3a8489a9d19924809f09edda01cb32372f74dee6261da4f420290a-rootfs.mount: Deactivated successfully. Dec 16 14:15:32.896161 systemd[1]: Created slice kubepods-burstable-pod764f81db_5c32_41e4_8912_f297ff0e1255.slice - libcontainer container kubepods-burstable-pod764f81db_5c32_41e4_8912_f297ff0e1255.slice. Dec 16 14:15:32.923073 systemd[1]: Created slice kubepods-besteffort-pod6e75d316_f851_4233_b053_cd9dc148b92b.slice - libcontainer container kubepods-besteffort-pod6e75d316_f851_4233_b053_cd9dc148b92b.slice. Dec 16 14:15:32.940207 containerd[1683]: time="2025-12-16T14:15:32.939084545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 14:15:32.944248 systemd[1]: Created slice kubepods-burstable-pod142f5ab3_cbeb_48ba_8db9_bbe60551f068.slice - libcontainer container kubepods-burstable-pod142f5ab3_cbeb_48ba_8db9_bbe60551f068.slice. Dec 16 14:15:32.960100 systemd[1]: Created slice kubepods-besteffort-podd3330c44_23de_4b0d_b566_d21abd5c2c90.slice - libcontainer container kubepods-besteffort-podd3330c44_23de_4b0d_b566_d21abd5c2c90.slice. Dec 16 14:15:32.971988 systemd[1]: Created slice kubepods-besteffort-pod86b288c9_63f1_4f44_8c9e_eb5a65f83789.slice - libcontainer container kubepods-besteffort-pod86b288c9_63f1_4f44_8c9e_eb5a65f83789.slice. Dec 16 14:15:32.988790 systemd[1]: Created slice kubepods-besteffort-pod5790375d_68f1_4555_984f_974084235d42.slice - libcontainer container kubepods-besteffort-pod5790375d_68f1_4555_984f_974084235d42.slice. Dec 16 14:15:33.003000 systemd[1]: Created slice kubepods-besteffort-pod8ce9819e_d54a_4fa9_97c3_d8a0be2ecea0.slice - libcontainer container kubepods-besteffort-pod8ce9819e_d54a_4fa9_97c3_d8a0be2ecea0.slice. 
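The two "Pulled image" entries above report a byte size and a wall-clock duration (5,941,314 bytes in ~1.6 s for pod2daemon-flexvol, 71,941,459 bytes in ~5.4 s for calico/cni), which gives a rough effective pull rate. A sketch of that calculation, assuming the reported size is the image size containerd records rather than bytes actually transferred, so the rate is only an estimate:

```go
package main

import "fmt"

func main() {
	// Size/duration pairs from the "Pulled image" messages above.
	pulls := []struct {
		name    string
		bytes   float64
		seconds float64
	}{
		{"pod2daemon-flexvol:v3.30.4", 5941314, 1.607021573},
		{"cni:v3.30.4", 71941459, 5.417896852},
	}
	for _, p := range pulls {
		mib := p.bytes / (1 << 20)
		fmt.Printf("%s: %.1f MiB in %.2fs (~%.1f MiB/s)\n", p.name, mib, p.seconds, mib/p.seconds)
	}
}
```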
Dec 16 14:15:33.043764 kubelet[2975]: I1216 14:15:33.043682 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d3330c44-23de-4b0d-b566-d21abd5c2c90-whisker-backend-key-pair\") pod \"whisker-54dc96c98f-scrv7\" (UID: \"d3330c44-23de-4b0d-b566-d21abd5c2c90\") " pod="calico-system/whisker-54dc96c98f-scrv7" Dec 16 14:15:33.043764 kubelet[2975]: I1216 14:15:33.043746 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwxk7\" (UniqueName: \"kubernetes.io/projected/d3330c44-23de-4b0d-b566-d21abd5c2c90-kube-api-access-xwxk7\") pod \"whisker-54dc96c98f-scrv7\" (UID: \"d3330c44-23de-4b0d-b566-d21abd5c2c90\") " pod="calico-system/whisker-54dc96c98f-scrv7" Dec 16 14:15:33.043764 kubelet[2975]: I1216 14:15:33.043781 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/86b288c9-63f1-4f44-8c9e-eb5a65f83789-goldmane-key-pair\") pod \"goldmane-666569f655-2m9mr\" (UID: \"86b288c9-63f1-4f44-8c9e-eb5a65f83789\") " pod="calico-system/goldmane-666569f655-2m9mr" Dec 16 14:15:33.044964 kubelet[2975]: I1216 14:15:33.043831 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/764f81db-5c32-41e4-8912-f297ff0e1255-config-volume\") pod \"coredns-674b8bbfcf-t2lk6\" (UID: \"764f81db-5c32-41e4-8912-f297ff0e1255\") " pod="kube-system/coredns-674b8bbfcf-t2lk6" Dec 16 14:15:33.044964 kubelet[2975]: I1216 14:15:33.043865 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6e75d316-f851-4233-b053-cd9dc148b92b-calico-apiserver-certs\") pod \"calico-apiserver-66659c8785-m7865\" (UID: \"6e75d316-f851-4233-b053-cd9dc148b92b\") " pod="calico-apiserver/calico-apiserver-66659c8785-m7865" Dec 16 14:15:33.044964 kubelet[2975]: I1216 14:15:33.043898 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkmx5\" (UniqueName: \"kubernetes.io/projected/6e75d316-f851-4233-b053-cd9dc148b92b-kube-api-access-dkmx5\") pod \"calico-apiserver-66659c8785-m7865\" (UID: \"6e75d316-f851-4233-b053-cd9dc148b92b\") " pod="calico-apiserver/calico-apiserver-66659c8785-m7865" Dec 16 14:15:33.044964 kubelet[2975]: I1216 14:15:33.043944 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/142f5ab3-cbeb-48ba-8db9-bbe60551f068-config-volume\") pod \"coredns-674b8bbfcf-x9ncs\" (UID: \"142f5ab3-cbeb-48ba-8db9-bbe60551f068\") " pod="kube-system/coredns-674b8bbfcf-x9ncs" Dec 16 14:15:33.044964 kubelet[2975]: I1216 14:15:33.043982 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl7nt\" (UniqueName: \"kubernetes.io/projected/764f81db-5c32-41e4-8912-f297ff0e1255-kube-api-access-rl7nt\") pod \"coredns-674b8bbfcf-t2lk6\" (UID: \"764f81db-5c32-41e4-8912-f297ff0e1255\") " pod="kube-system/coredns-674b8bbfcf-t2lk6" Dec 16 14:15:33.045656 kubelet[2975]: I1216 14:15:33.044012 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0-calico-apiserver-certs\") pod \"calico-apiserver-66659c8785-q7tzf\" (UID: \"8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0\") " pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" Dec 16 14:15:33.045656 kubelet[2975]: I1216 14:15:33.044040 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbb22\" (UniqueName: \"kubernetes.io/projected/8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0-kube-api-access-fbb22\") pod \"calico-apiserver-66659c8785-q7tzf\" (UID: \"8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0\") " pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" Dec 16 14:15:33.045656 kubelet[2975]: I1216 14:15:33.044064 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3330c44-23de-4b0d-b566-d21abd5c2c90-whisker-ca-bundle\") pod \"whisker-54dc96c98f-scrv7\" (UID: \"d3330c44-23de-4b0d-b566-d21abd5c2c90\") " pod="calico-system/whisker-54dc96c98f-scrv7" Dec 16 14:15:33.045656 kubelet[2975]: I1216 14:15:33.044089 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8l9\" (UniqueName: \"kubernetes.io/projected/86b288c9-63f1-4f44-8c9e-eb5a65f83789-kube-api-access-tb8l9\") pod \"goldmane-666569f655-2m9mr\" (UID: \"86b288c9-63f1-4f44-8c9e-eb5a65f83789\") " pod="calico-system/goldmane-666569f655-2m9mr" Dec 16 14:15:33.046295 kubelet[2975]: I1216 14:15:33.044164 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5790375d-68f1-4555-984f-974084235d42-tigera-ca-bundle\") pod \"calico-kube-controllers-8c8c88d5b-gxh2p\" (UID: \"5790375d-68f1-4555-984f-974084235d42\") " pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" Dec 16 14:15:33.047078 kubelet[2975]: I1216 14:15:33.046991 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8vnj\" (UniqueName: \"kubernetes.io/projected/142f5ab3-cbeb-48ba-8db9-bbe60551f068-kube-api-access-n8vnj\") pod \"coredns-674b8bbfcf-x9ncs\" (UID: \"142f5ab3-cbeb-48ba-8db9-bbe60551f068\") " pod="kube-system/coredns-674b8bbfcf-x9ncs" Dec 16 14:15:33.047078 kubelet[2975]: I1216 14:15:33.047055 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86b288c9-63f1-4f44-8c9e-eb5a65f83789-config\") pod \"goldmane-666569f655-2m9mr\" (UID: \"86b288c9-63f1-4f44-8c9e-eb5a65f83789\") " pod="calico-system/goldmane-666569f655-2m9mr" Dec 16 14:15:33.047204 kubelet[2975]: I1216 14:15:33.047096 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwp5p\" (UniqueName: \"kubernetes.io/projected/5790375d-68f1-4555-984f-974084235d42-kube-api-access-hwp5p\") pod \"calico-kube-controllers-8c8c88d5b-gxh2p\" (UID: \"5790375d-68f1-4555-984f-974084235d42\") " pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" Dec 16 14:15:33.047204 kubelet[2975]: I1216 14:15:33.047126 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86b288c9-63f1-4f44-8c9e-eb5a65f83789-goldmane-ca-bundle\") pod \"goldmane-666569f655-2m9mr\" (UID: \"86b288c9-63f1-4f44-8c9e-eb5a65f83789\") " 
pod="calico-system/goldmane-666569f655-2m9mr" Dec 16 14:15:33.243142 containerd[1683]: time="2025-12-16T14:15:33.242706481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66659c8785-m7865,Uid:6e75d316-f851-4233-b053-cd9dc148b92b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 14:15:33.254454 containerd[1683]: time="2025-12-16T14:15:33.254379188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x9ncs,Uid:142f5ab3-cbeb-48ba-8db9-bbe60551f068,Namespace:kube-system,Attempt:0,}" Dec 16 14:15:33.275205 containerd[1683]: time="2025-12-16T14:15:33.275143915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54dc96c98f-scrv7,Uid:d3330c44-23de-4b0d-b566-d21abd5c2c90,Namespace:calico-system,Attempt:0,}" Dec 16 14:15:33.287831 containerd[1683]: time="2025-12-16T14:15:33.287718786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2m9mr,Uid:86b288c9-63f1-4f44-8c9e-eb5a65f83789,Namespace:calico-system,Attempt:0,}" Dec 16 14:15:33.311710 containerd[1683]: time="2025-12-16T14:15:33.311316261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66659c8785-q7tzf,Uid:8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0,Namespace:calico-apiserver,Attempt:0,}" Dec 16 14:15:33.311855 containerd[1683]: time="2025-12-16T14:15:33.311767374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c8c88d5b-gxh2p,Uid:5790375d-68f1-4555-984f-974084235d42,Namespace:calico-system,Attempt:0,}" Dec 16 14:15:33.516424 containerd[1683]: time="2025-12-16T14:15:33.516046488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2lk6,Uid:764f81db-5c32-41e4-8912-f297ff0e1255,Namespace:kube-system,Attempt:0,}" Dec 16 14:15:33.565322 systemd[1]: Created slice kubepods-besteffort-pod1248d2d9_77a6_4a9d_9b93_4af871a2edbf.slice - libcontainer container kubepods-besteffort-pod1248d2d9_77a6_4a9d_9b93_4af871a2edbf.slice. 
Dec 16 14:15:33.570306 containerd[1683]: time="2025-12-16T14:15:33.570260079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tfzg7,Uid:1248d2d9-77a6-4a9d-9b93-4af871a2edbf,Namespace:calico-system,Attempt:0,}" Dec 16 14:15:33.616537 containerd[1683]: time="2025-12-16T14:15:33.616474976Z" level=error msg="Failed to destroy network for sandbox \"725eeeb1427a4bf37ee8cc648d6750cfdf01be801d57fbc71b2c28cfe724838e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.619646 containerd[1683]: time="2025-12-16T14:15:33.619378418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66659c8785-q7tzf,Uid:8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"725eeeb1427a4bf37ee8cc648d6750cfdf01be801d57fbc71b2c28cfe724838e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.622883 kubelet[2975]: E1216 14:15:33.620926 2975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"725eeeb1427a4bf37ee8cc648d6750cfdf01be801d57fbc71b2c28cfe724838e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.622883 kubelet[2975]: E1216 14:15:33.622284 2975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"725eeeb1427a4bf37ee8cc648d6750cfdf01be801d57fbc71b2c28cfe724838e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" Dec 16 14:15:33.622883 kubelet[2975]: E1216 14:15:33.622331 2975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"725eeeb1427a4bf37ee8cc648d6750cfdf01be801d57fbc71b2c28cfe724838e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" Dec 16 14:15:33.623133 kubelet[2975]: E1216 14:15:33.622420 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66659c8785-q7tzf_calico-apiserver(8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66659c8785-q7tzf_calico-apiserver(8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"725eeeb1427a4bf37ee8cc648d6750cfdf01be801d57fbc71b2c28cfe724838e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" podUID="8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0" Dec 16 14:15:33.639281 containerd[1683]: 
time="2025-12-16T14:15:33.639200458Z" level=error msg="Failed to destroy network for sandbox \"c4e71d867bd64ca152b539c9bdfa28d0f69c1bac4360ebe5dfd40b15b0a17def\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.646327 containerd[1683]: time="2025-12-16T14:15:33.646236616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x9ncs,Uid:142f5ab3-cbeb-48ba-8db9-bbe60551f068,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4e71d867bd64ca152b539c9bdfa28d0f69c1bac4360ebe5dfd40b15b0a17def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.647209 kubelet[2975]: E1216 14:15:33.646599 2975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4e71d867bd64ca152b539c9bdfa28d0f69c1bac4360ebe5dfd40b15b0a17def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.647209 kubelet[2975]: E1216 14:15:33.646684 2975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4e71d867bd64ca152b539c9bdfa28d0f69c1bac4360ebe5dfd40b15b0a17def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x9ncs" Dec 16 14:15:33.647209 kubelet[2975]: E1216 14:15:33.646729 2975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4e71d867bd64ca152b539c9bdfa28d0f69c1bac4360ebe5dfd40b15b0a17def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x9ncs" Dec 16 14:15:33.647902 kubelet[2975]: E1216 14:15:33.646805 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-x9ncs_kube-system(142f5ab3-cbeb-48ba-8db9-bbe60551f068)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-x9ncs_kube-system(142f5ab3-cbeb-48ba-8db9-bbe60551f068)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4e71d867bd64ca152b539c9bdfa28d0f69c1bac4360ebe5dfd40b15b0a17def\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-x9ncs" podUID="142f5ab3-cbeb-48ba-8db9-bbe60551f068" Dec 16 14:15:33.692207 containerd[1683]: time="2025-12-16T14:15:33.691528819Z" level=error msg="Failed to destroy network for sandbox \"f647e1597e75f723341ccb3cfe587ff7f5589f30967b64f9a93fca50e9df62bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 
14:15:33.694393 containerd[1683]: time="2025-12-16T14:15:33.694349949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2m9mr,Uid:86b288c9-63f1-4f44-8c9e-eb5a65f83789,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f647e1597e75f723341ccb3cfe587ff7f5589f30967b64f9a93fca50e9df62bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.694910 kubelet[2975]: E1216 14:15:33.694789 2975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f647e1597e75f723341ccb3cfe587ff7f5589f30967b64f9a93fca50e9df62bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.695065 kubelet[2975]: E1216 14:15:33.695029 2975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f647e1597e75f723341ccb3cfe587ff7f5589f30967b64f9a93fca50e9df62bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2m9mr" Dec 16 14:15:33.695579 kubelet[2975]: E1216 14:15:33.695075 2975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f647e1597e75f723341ccb3cfe587ff7f5589f30967b64f9a93fca50e9df62bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2m9mr" Dec 16 14:15:33.695836 kubelet[2975]: E1216 14:15:33.695167 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-2m9mr_calico-system(86b288c9-63f1-4f44-8c9e-eb5a65f83789)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-2m9mr_calico-system(86b288c9-63f1-4f44-8c9e-eb5a65f83789)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f647e1597e75f723341ccb3cfe587ff7f5589f30967b64f9a93fca50e9df62bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2m9mr" podUID="86b288c9-63f1-4f44-8c9e-eb5a65f83789" Dec 16 14:15:33.704547 containerd[1683]: time="2025-12-16T14:15:33.704415419Z" level=error msg="Failed to destroy network for sandbox \"df4cef6ad0245215d853eb59c8e324fbb59fc0139f0111077141ee50261d31df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.709080 containerd[1683]: time="2025-12-16T14:15:33.709024800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c8c88d5b-gxh2p,Uid:5790375d-68f1-4555-984f-974084235d42,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"df4cef6ad0245215d853eb59c8e324fbb59fc0139f0111077141ee50261d31df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.709546 kubelet[2975]: E1216 14:15:33.709439 2975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4cef6ad0245215d853eb59c8e324fbb59fc0139f0111077141ee50261d31df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.709546 kubelet[2975]: E1216 14:15:33.709534 2975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4cef6ad0245215d853eb59c8e324fbb59fc0139f0111077141ee50261d31df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" Dec 16 14:15:33.710204 kubelet[2975]: E1216 14:15:33.709572 2975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4cef6ad0245215d853eb59c8e324fbb59fc0139f0111077141ee50261d31df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" Dec 16 14:15:33.710204 kubelet[2975]: E1216 14:15:33.709653 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8c8c88d5b-gxh2p_calico-system(5790375d-68f1-4555-984f-974084235d42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8c8c88d5b-gxh2p_calico-system(5790375d-68f1-4555-984f-974084235d42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df4cef6ad0245215d853eb59c8e324fbb59fc0139f0111077141ee50261d31df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" podUID="5790375d-68f1-4555-984f-974084235d42" Dec 16 14:15:33.712237 containerd[1683]: time="2025-12-16T14:15:33.712129753Z" level=error msg="Failed to destroy network for sandbox \"d99ab420e7b4b65205cd369b64f5b0465ca28fe12789bcb326aed54862e88d05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.714466 containerd[1683]: time="2025-12-16T14:15:33.714384275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54dc96c98f-scrv7,Uid:d3330c44-23de-4b0d-b566-d21abd5c2c90,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99ab420e7b4b65205cd369b64f5b0465ca28fe12789bcb326aed54862e88d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.715862 
kubelet[2975]: E1216 14:15:33.715612 2975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99ab420e7b4b65205cd369b64f5b0465ca28fe12789bcb326aed54862e88d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.716296 kubelet[2975]: E1216 14:15:33.716033 2975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99ab420e7b4b65205cd369b64f5b0465ca28fe12789bcb326aed54862e88d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54dc96c98f-scrv7" Dec 16 14:15:33.716839 kubelet[2975]: E1216 14:15:33.716435 2975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99ab420e7b4b65205cd369b64f5b0465ca28fe12789bcb326aed54862e88d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54dc96c98f-scrv7" Dec 16 14:15:33.717775 kubelet[2975]: E1216 14:15:33.717491 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54dc96c98f-scrv7_calico-system(d3330c44-23de-4b0d-b566-d21abd5c2c90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54dc96c98f-scrv7_calico-system(d3330c44-23de-4b0d-b566-d21abd5c2c90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d99ab420e7b4b65205cd369b64f5b0465ca28fe12789bcb326aed54862e88d05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54dc96c98f-scrv7" podUID="d3330c44-23de-4b0d-b566-d21abd5c2c90" Dec 16 14:15:33.719924 containerd[1683]: time="2025-12-16T14:15:33.719833768Z" level=error msg="Failed to destroy network for sandbox \"fa7f0a1876c9b979a0c716237def39ba9294d387190054475b3639a427f04329\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.725085 containerd[1683]: time="2025-12-16T14:15:33.723716860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66659c8785-m7865,Uid:6e75d316-f851-4233-b053-cd9dc148b92b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa7f0a1876c9b979a0c716237def39ba9294d387190054475b3639a427f04329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.725376 kubelet[2975]: E1216 14:15:33.725322 2975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa7f0a1876c9b979a0c716237def39ba9294d387190054475b3639a427f04329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.725480 kubelet[2975]: E1216 14:15:33.725413 2975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa7f0a1876c9b979a0c716237def39ba9294d387190054475b3639a427f04329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" Dec 16 14:15:33.725480 kubelet[2975]: E1216 14:15:33.725451 2975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa7f0a1876c9b979a0c716237def39ba9294d387190054475b3639a427f04329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" Dec 16 14:15:33.726376 kubelet[2975]: E1216 14:15:33.725556 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66659c8785-m7865_calico-apiserver(6e75d316-f851-4233-b053-cd9dc148b92b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66659c8785-m7865_calico-apiserver(6e75d316-f851-4233-b053-cd9dc148b92b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa7f0a1876c9b979a0c716237def39ba9294d387190054475b3639a427f04329\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" podUID="6e75d316-f851-4233-b053-cd9dc148b92b" Dec 16 14:15:33.772530 containerd[1683]: time="2025-12-16T14:15:33.771086615Z" level=error msg="Failed to destroy network for sandbox \"25418d2ea9c944aba7fc6bb8332f49a92ba9f027748df2f838763648a57cca26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.774301 containerd[1683]: time="2025-12-16T14:15:33.774257627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tfzg7,Uid:1248d2d9-77a6-4a9d-9b93-4af871a2edbf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25418d2ea9c944aba7fc6bb8332f49a92ba9f027748df2f838763648a57cca26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.775253 kubelet[2975]: E1216 14:15:33.774568 2975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25418d2ea9c944aba7fc6bb8332f49a92ba9f027748df2f838763648a57cca26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.775253 kubelet[2975]: E1216 14:15:33.774654 2975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"25418d2ea9c944aba7fc6bb8332f49a92ba9f027748df2f838763648a57cca26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tfzg7" Dec 16 14:15:33.775253 kubelet[2975]: E1216 14:15:33.774687 2975 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25418d2ea9c944aba7fc6bb8332f49a92ba9f027748df2f838763648a57cca26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tfzg7" Dec 16 14:15:33.775543 kubelet[2975]: E1216 14:15:33.774781 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tfzg7_calico-system(1248d2d9-77a6-4a9d-9b93-4af871a2edbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tfzg7_calico-system(1248d2d9-77a6-4a9d-9b93-4af871a2edbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25418d2ea9c944aba7fc6bb8332f49a92ba9f027748df2f838763648a57cca26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:15:33.787772 containerd[1683]: time="2025-12-16T14:15:33.787711963Z" level=error msg="Failed to destroy network for sandbox \"19461754befa93c4bebd32fd200781a9f6388e1945682f7b02e4f68a54f262aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.789884 containerd[1683]: time="2025-12-16T14:15:33.789840925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2lk6,Uid:764f81db-5c32-41e4-8912-f297ff0e1255,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19461754befa93c4bebd32fd200781a9f6388e1945682f7b02e4f68a54f262aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.790246 kubelet[2975]: E1216 14:15:33.790109 2975 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19461754befa93c4bebd32fd200781a9f6388e1945682f7b02e4f68a54f262aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:15:33.790246 kubelet[2975]: E1216 14:15:33.790207 2975 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19461754befa93c4bebd32fd200781a9f6388e1945682f7b02e4f68a54f262aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t2lk6" Dec 16 14:15:33.790246 kubelet[2975]: E1216 14:15:33.790241 2975 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19461754befa93c4bebd32fd200781a9f6388e1945682f7b02e4f68a54f262aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t2lk6" Dec 16 14:15:33.790687 kubelet[2975]: E1216 14:15:33.790337 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t2lk6_kube-system(764f81db-5c32-41e4-8912-f297ff0e1255)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t2lk6_kube-system(764f81db-5c32-41e4-8912-f297ff0e1255)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19461754befa93c4bebd32fd200781a9f6388e1945682f7b02e4f68a54f262aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t2lk6" podUID="764f81db-5c32-41e4-8912-f297ff0e1255" Dec 16 14:15:39.588222 kubelet[2975]: I1216 14:15:39.587581 2975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 14:15:39.736000 audit[4004]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:39.747311 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 14:15:39.747464 kernel: audit: type=1325 audit(1765894539.736:583): table=filter:119 family=2 entries=21 op=nft_register_rule pid=4004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:39.736000 audit[4004]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5f120b70 a2=0 a3=7fff5f120b5c items=0 ppid=3086 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:39.758223 kernel: audit: type=1300 audit(1765894539.736:583): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5f120b70 a2=0 a3=7fff5f120b5c items=0 ppid=3086 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:39.759052 kernel: audit: type=1327 audit(1765894539.736:583): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:39.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:39.753000 audit[4004]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:39.767312 kernel: audit: type=1325 audit(1765894539.753:584): table=nat:120 family=2 entries=19 op=nft_register_chain pid=4004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:39.753000 audit[4004]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff5f120b70 a2=0 a3=7fff5f120b5c items=0 ppid=3086 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:39.774296 kernel: audit: type=1300 audit(1765894539.753:584): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff5f120b70 a2=0 a3=7fff5f120b5c items=0 ppid=3086 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:39.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:39.779235 kernel: audit: type=1327 audit(1765894539.753:584): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:44.814634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420666052.mount: Deactivated successfully. Dec 16 14:15:44.943890 containerd[1683]: time="2025-12-16T14:15:44.926755864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:44.952929 containerd[1683]: time="2025-12-16T14:15:44.952850838Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:44.953285 containerd[1683]: time="2025-12-16T14:15:44.953005984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 14:15:44.955251 containerd[1683]: time="2025-12-16T14:15:44.954916857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:15:44.955549 containerd[1683]: time="2025-12-16T14:15:44.955208136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.01601983s" Dec 16 14:15:44.966956 containerd[1683]: time="2025-12-16T14:15:44.966708718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 14:15:45.015218 containerd[1683]: time="2025-12-16T14:15:45.015016446Z" level=info msg="CreateContainer within sandbox \"dadb207e3569946b19681404e4b36ea88e7d16bd21e7604736deee9447fc477e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 14:15:45.080332 containerd[1683]: time="2025-12-16T14:15:45.077944144Z" level=info msg="Container a8b77ab7e81b09eb5518bbef703da90cbbc30e1a280228c1b1521e6fff65266f: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:15:45.082607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236956864.mount: Deactivated successfully. 
Dec 16 14:15:45.148808 containerd[1683]: time="2025-12-16T14:15:45.148747767Z" level=info msg="CreateContainer within sandbox \"dadb207e3569946b19681404e4b36ea88e7d16bd21e7604736deee9447fc477e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a8b77ab7e81b09eb5518bbef703da90cbbc30e1a280228c1b1521e6fff65266f\"" Dec 16 14:15:45.150505 containerd[1683]: time="2025-12-16T14:15:45.150078994Z" level=info msg="StartContainer for \"a8b77ab7e81b09eb5518bbef703da90cbbc30e1a280228c1b1521e6fff65266f\"" Dec 16 14:15:45.155594 containerd[1683]: time="2025-12-16T14:15:45.155473643Z" level=info msg="connecting to shim a8b77ab7e81b09eb5518bbef703da90cbbc30e1a280228c1b1521e6fff65266f" address="unix:///run/containerd/s/0fede2e378026f954fe5c6b71716da0b0db7ce1c86aca98ac7aa47a16263e756" protocol=ttrpc version=3 Dec 16 14:15:45.257755 systemd[1]: Started cri-containerd-a8b77ab7e81b09eb5518bbef703da90cbbc30e1a280228c1b1521e6fff65266f.scope - libcontainer container a8b77ab7e81b09eb5518bbef703da90cbbc30e1a280228c1b1521e6fff65266f. Dec 16 14:15:45.369380 kernel: audit: type=1334 audit(1765894545.362:585): prog-id=178 op=LOAD Dec 16 14:15:45.362000 audit: BPF prog-id=178 op=LOAD Dec 16 14:15:45.362000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3499 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:45.379146 kernel: audit: type=1300 audit(1765894545.362:585): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3499 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:45.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138623737616237653831623039656235353138626265663730336461 Dec 16 14:15:45.388228 kernel: audit: type=1327 audit(1765894545.362:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138623737616237653831623039656235353138626265663730336461 Dec 16 14:15:45.393715 kernel: audit: type=1334 audit(1765894545.369:586): prog-id=179 op=LOAD Dec 16 14:15:45.393792 kernel: audit: type=1300 audit(1765894545.369:586): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3499 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:45.369000 audit: BPF prog-id=179 op=LOAD Dec 16 14:15:45.369000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3499 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:45.369000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138623737616237653831623039656235353138626265663730336461 Dec 16 14:15:45.398917 kernel: audit: type=1327 audit(1765894545.369:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138623737616237653831623039656235353138626265663730336461 Dec 16 14:15:45.369000 audit: BPF prog-id=179 op=UNLOAD Dec 16 14:15:45.412248 kernel: audit: type=1334 audit(1765894545.369:587): prog-id=179 op=UNLOAD Dec 16 14:15:45.369000 audit[4008]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3499 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:45.424259 kernel: audit: type=1300 audit(1765894545.369:587): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3499 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:45.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138623737616237653831623039656235353138626265663730336461 Dec 16 14:15:45.369000 audit: BPF prog-id=178 op=UNLOAD Dec 16 14:15:45.430962 kernel: audit: type=1327 audit(1765894545.369:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138623737616237653831623039656235353138626265663730336461 Dec 16 14:15:45.431039 kernel: audit: type=1334 audit(1765894545.369:588): prog-id=178 op=UNLOAD Dec 16 14:15:45.369000 audit[4008]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3499 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:45.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138623737616237653831623039656235353138626265663730336461 Dec 16 14:15:45.369000 audit: BPF prog-id=180 op=LOAD Dec 16 14:15:45.369000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3499 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:45.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138623737616237653831623039656235353138626265663730336461 Dec 16 14:15:45.471867 
containerd[1683]: time="2025-12-16T14:15:45.471819800Z" level=info msg="StartContainer for \"a8b77ab7e81b09eb5518bbef703da90cbbc30e1a280228c1b1521e6fff65266f\" returns successfully" Dec 16 14:15:45.855314 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 14:15:45.856695 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 14:15:46.269508 kubelet[2975]: I1216 14:15:46.269449 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwxk7\" (UniqueName: \"kubernetes.io/projected/d3330c44-23de-4b0d-b566-d21abd5c2c90-kube-api-access-xwxk7\") pod \"d3330c44-23de-4b0d-b566-d21abd5c2c90\" (UID: \"d3330c44-23de-4b0d-b566-d21abd5c2c90\") " Dec 16 14:15:46.270259 kubelet[2975]: I1216 14:15:46.269519 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3330c44-23de-4b0d-b566-d21abd5c2c90-whisker-ca-bundle\") pod \"d3330c44-23de-4b0d-b566-d21abd5c2c90\" (UID: \"d3330c44-23de-4b0d-b566-d21abd5c2c90\") " Dec 16 14:15:46.270259 kubelet[2975]: I1216 14:15:46.269563 2975 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d3330c44-23de-4b0d-b566-d21abd5c2c90-whisker-backend-key-pair\") pod \"d3330c44-23de-4b0d-b566-d21abd5c2c90\" (UID: \"d3330c44-23de-4b0d-b566-d21abd5c2c90\") " Dec 16 14:15:46.279220 kubelet[2975]: I1216 14:15:46.278392 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3330c44-23de-4b0d-b566-d21abd5c2c90-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d3330c44-23de-4b0d-b566-d21abd5c2c90" (UID: "d3330c44-23de-4b0d-b566-d21abd5c2c90"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 14:15:46.298024 systemd[1]: var-lib-kubelet-pods-d3330c44\x2d23de\x2d4b0d\x2db566\x2dd21abd5c2c90-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxwxk7.mount: Deactivated successfully. Dec 16 14:15:46.302909 kubelet[2975]: I1216 14:15:46.301429 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3330c44-23de-4b0d-b566-d21abd5c2c90-kube-api-access-xwxk7" (OuterVolumeSpecName: "kube-api-access-xwxk7") pod "d3330c44-23de-4b0d-b566-d21abd5c2c90" (UID: "d3330c44-23de-4b0d-b566-d21abd5c2c90"). InnerVolumeSpecName "kube-api-access-xwxk7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 14:15:46.317944 kubelet[2975]: I1216 14:15:46.315968 2975 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3330c44-23de-4b0d-b566-d21abd5c2c90-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d3330c44-23de-4b0d-b566-d21abd5c2c90" (UID: "d3330c44-23de-4b0d-b566-d21abd5c2c90"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 14:15:46.316902 systemd[1]: var-lib-kubelet-pods-d3330c44\x2d23de\x2d4b0d\x2db566\x2dd21abd5c2c90-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 14:15:46.370739 kubelet[2975]: I1216 14:15:46.370605 2975 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d3330c44-23de-4b0d-b566-d21abd5c2c90-whisker-backend-key-pair\") on node \"srv-6slrx.gb1.brightbox.com\" DevicePath \"\"" Dec 16 14:15:46.370739 kubelet[2975]: I1216 14:15:46.370675 2975 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xwxk7\" (UniqueName: \"kubernetes.io/projected/d3330c44-23de-4b0d-b566-d21abd5c2c90-kube-api-access-xwxk7\") on node \"srv-6slrx.gb1.brightbox.com\" DevicePath \"\"" Dec 16 14:15:46.370739 kubelet[2975]: I1216 14:15:46.370693 2975 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3330c44-23de-4b0d-b566-d21abd5c2c90-whisker-ca-bundle\") on node \"srv-6slrx.gb1.brightbox.com\" DevicePath \"\"" Dec 16 14:15:46.581003 containerd[1683]: time="2025-12-16T14:15:46.580422573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66659c8785-m7865,Uid:6e75d316-f851-4233-b053-cd9dc148b92b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 14:15:46.582503 containerd[1683]: time="2025-12-16T14:15:46.580645247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x9ncs,Uid:142f5ab3-cbeb-48ba-8db9-bbe60551f068,Namespace:kube-system,Attempt:0,}" Dec 16 14:15:46.582503 containerd[1683]: time="2025-12-16T14:15:46.580691532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c8c88d5b-gxh2p,Uid:5790375d-68f1-4555-984f-974084235d42,Namespace:calico-system,Attempt:0,}" Dec 16 14:15:46.582503 containerd[1683]: time="2025-12-16T14:15:46.580737521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tfzg7,Uid:1248d2d9-77a6-4a9d-9b93-4af871a2edbf,Namespace:calico-system,Attempt:0,}" Dec 16 14:15:46.582503 containerd[1683]: time="2025-12-16T14:15:46.580776958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2m9mr,Uid:86b288c9-63f1-4f44-8c9e-eb5a65f83789,Namespace:calico-system,Attempt:0,}" Dec 16 14:15:47.163198 systemd[1]: Removed slice kubepods-besteffort-podd3330c44_23de_4b0d_b566_d21abd5c2c90.slice - libcontainer container kubepods-besteffort-podd3330c44_23de_4b0d_b566_d21abd5c2c90.slice. 
Dec 16 14:15:47.226482 kubelet[2975]: I1216 14:15:47.215068 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nhh62" podStartSLOduration=3.606194963 podStartE2EDuration="29.206615388s" podCreationTimestamp="2025-12-16 14:15:18 +0000 UTC" firstStartedPulling="2025-12-16 14:15:19.378885568 +0000 UTC m=+28.150527365" lastFinishedPulling="2025-12-16 14:15:44.979306 +0000 UTC m=+53.750947790" observedRunningTime="2025-12-16 14:15:46.206651046 +0000 UTC m=+54.978292864" watchObservedRunningTime="2025-12-16 14:15:47.206615388 +0000 UTC m=+55.978257184" Dec 16 14:15:47.386024 kubelet[2975]: I1216 14:15:47.385978 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/32682285-0749-4677-aeaa-b30aca23774e-whisker-backend-key-pair\") pod \"whisker-7b4555d8df-7md4r\" (UID: \"32682285-0749-4677-aeaa-b30aca23774e\") " pod="calico-system/whisker-7b4555d8df-7md4r" Dec 16 14:15:47.386799 kubelet[2975]: I1216 14:15:47.386055 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32682285-0749-4677-aeaa-b30aca23774e-whisker-ca-bundle\") pod \"whisker-7b4555d8df-7md4r\" (UID: \"32682285-0749-4677-aeaa-b30aca23774e\") " pod="calico-system/whisker-7b4555d8df-7md4r" Dec 16 14:15:47.386799 kubelet[2975]: I1216 14:15:47.386134 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbf5n\" (UniqueName: \"kubernetes.io/projected/32682285-0749-4677-aeaa-b30aca23774e-kube-api-access-wbf5n\") pod \"whisker-7b4555d8df-7md4r\" (UID: \"32682285-0749-4677-aeaa-b30aca23774e\") " pod="calico-system/whisker-7b4555d8df-7md4r" Dec 16 14:15:47.397637 systemd[1]: Created slice kubepods-besteffort-pod32682285_0749_4677_aeaa_b30aca23774e.slice - libcontainer container kubepods-besteffort-pod32682285_0749_4677_aeaa_b30aca23774e.slice. 
Dec 16 14:15:47.464917 systemd-networkd[1572]: cali0f4ab86104a: Link UP Dec 16 14:15:47.466119 systemd-networkd[1572]: cali0f4ab86104a: Gained carrier Dec 16 14:15:47.537392 containerd[1683]: 2025-12-16 14:15:46.893 [INFO][4113] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 14:15:47.537392 containerd[1683]: 2025-12-16 14:15:46.951 [INFO][4113] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0 goldmane-666569f655- calico-system 86b288c9-63f1-4f44-8c9e-eb5a65f83789 868 0 2025-12-16 14:15:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-6slrx.gb1.brightbox.com goldmane-666569f655-2m9mr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0f4ab86104a [] [] }} ContainerID="5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" Namespace="calico-system" Pod="goldmane-666569f655-2m9mr" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-" Dec 16 14:15:47.537392 containerd[1683]: 2025-12-16 14:15:46.951 [INFO][4113] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" Namespace="calico-system" Pod="goldmane-666569f655-2m9mr" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0" Dec 16 14:15:47.537392 containerd[1683]: 2025-12-16 14:15:47.197 [INFO][4156] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" HandleID="k8s-pod-network.5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" Workload="srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0" Dec 16 14:15:47.537763 containerd[1683]: 2025-12-16 14:15:47.200 [INFO][4156] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" HandleID="k8s-pod-network.5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" Workload="srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311180), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-6slrx.gb1.brightbox.com", "pod":"goldmane-666569f655-2m9mr", "timestamp":"2025-12-16 14:15:47.197631909 +0000 UTC"}, Hostname:"srv-6slrx.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:15:47.537763 containerd[1683]: 2025-12-16 14:15:47.200 [INFO][4156] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:15:47.537763 containerd[1683]: 2025-12-16 14:15:47.201 [INFO][4156] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 14:15:47.537763 containerd[1683]: 2025-12-16 14:15:47.202 [INFO][4156] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-6slrx.gb1.brightbox.com' Dec 16 14:15:47.537763 containerd[1683]: 2025-12-16 14:15:47.254 [INFO][4156] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.537763 containerd[1683]: 2025-12-16 14:15:47.286 [INFO][4156] ipam/ipam.go 394: Looking up existing affinities for host host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.537763 containerd[1683]: 2025-12-16 14:15:47.341 [INFO][4156] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.537763 containerd[1683]: 2025-12-16 14:15:47.355 [INFO][4156] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.537763 containerd[1683]: 2025-12-16 14:15:47.367 [INFO][4156] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.540951 containerd[1683]: 2025-12-16 14:15:47.368 [INFO][4156] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.540951 containerd[1683]: 2025-12-16 14:15:47.382 [INFO][4156] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50 Dec 16 14:15:47.540951 containerd[1683]: 2025-12-16 14:15:47.407 [INFO][4156] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.540951 containerd[1683]: 2025-12-16 14:15:47.429 [INFO][4156] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.65/26] block=192.168.102.64/26 handle="k8s-pod-network.5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.540951 containerd[1683]: 2025-12-16 14:15:47.429 [INFO][4156] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.65/26] handle="k8s-pod-network.5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.540951 containerd[1683]: 2025-12-16 14:15:47.429 [INFO][4156] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 14:15:47.540951 containerd[1683]: 2025-12-16 14:15:47.429 [INFO][4156] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.65/26] IPv6=[] ContainerID="5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" HandleID="k8s-pod-network.5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" Workload="srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0" Dec 16 14:15:47.545694 containerd[1683]: 2025-12-16 14:15:47.436 [INFO][4113] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" Namespace="calico-system" Pod="goldmane-666569f655-2m9mr" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"86b288c9-63f1-4f44-8c9e-eb5a65f83789", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 15, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-2m9mr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.102.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0f4ab86104a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:47.545902 containerd[1683]: 2025-12-16 14:15:47.437 [INFO][4113] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.65/32] ContainerID="5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" Namespace="calico-system" Pod="goldmane-666569f655-2m9mr" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0" Dec 16 14:15:47.545902 containerd[1683]: 2025-12-16 14:15:47.437 [INFO][4113] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f4ab86104a ContainerID="5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" Namespace="calico-system" Pod="goldmane-666569f655-2m9mr" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0" Dec 16 14:15:47.545902 containerd[1683]: 2025-12-16 14:15:47.467 [INFO][4113] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" Namespace="calico-system" Pod="goldmane-666569f655-2m9mr" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0" Dec 16 14:15:47.546664 containerd[1683]: 2025-12-16 14:15:47.468 [INFO][4113] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" 
Namespace="calico-system" Pod="goldmane-666569f655-2m9mr" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"86b288c9-63f1-4f44-8c9e-eb5a65f83789", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 15, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50", Pod:"goldmane-666569f655-2m9mr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.102.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0f4ab86104a", MAC:"da:8f:f8:e3:16:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:47.546835 containerd[1683]: 2025-12-16 14:15:47.522 [INFO][4113] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" Namespace="calico-system" Pod="goldmane-666569f655-2m9mr" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-goldmane--666569f655--2m9mr-eth0" Dec 16 14:15:47.560333 containerd[1683]: time="2025-12-16T14:15:47.559162043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2lk6,Uid:764f81db-5c32-41e4-8912-f297ff0e1255,Namespace:kube-system,Attempt:0,}" Dec 16 14:15:47.578723 kubelet[2975]: I1216 14:15:47.577897 2975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3330c44-23de-4b0d-b566-d21abd5c2c90" path="/var/lib/kubelet/pods/d3330c44-23de-4b0d-b566-d21abd5c2c90/volumes" Dec 16 14:15:47.656893 systemd-networkd[1572]: cali9c83f3ca113: Link UP Dec 16 14:15:47.662770 systemd-networkd[1572]: cali9c83f3ca113: Gained carrier Dec 16 14:15:47.725421 containerd[1683]: time="2025-12-16T14:15:47.724226106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b4555d8df-7md4r,Uid:32682285-0749-4677-aeaa-b30aca23774e,Namespace:calico-system,Attempt:0,}" Dec 16 14:15:47.750112 containerd[1683]: 2025-12-16 14:15:46.796 [INFO][4087] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 14:15:47.750112 containerd[1683]: 2025-12-16 14:15:46.916 [INFO][4087] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0 csi-node-driver- calico-system 1248d2d9-77a6-4a9d-9b93-4af871a2edbf 746 0 2025-12-16 14:15:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-6slrx.gb1.brightbox.com csi-node-driver-tfzg7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9c83f3ca113 [] [] }} ContainerID="3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" Namespace="calico-system" Pod="csi-node-driver-tfzg7" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-" Dec 16 14:15:47.750112 containerd[1683]: 2025-12-16 14:15:46.916 [INFO][4087] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" Namespace="calico-system" Pod="csi-node-driver-tfzg7" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0" Dec 16 14:15:47.750112 containerd[1683]: 2025-12-16 14:15:47.197 [INFO][4154] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" HandleID="k8s-pod-network.3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" Workload="srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0" Dec 16 14:15:47.751873 containerd[1683]: 2025-12-16 14:15:47.199 [INFO][4154] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" HandleID="k8s-pod-network.3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" Workload="srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102c10), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-6slrx.gb1.brightbox.com", "pod":"csi-node-driver-tfzg7", "timestamp":"2025-12-16 14:15:47.197792518 +0000 UTC"}, Hostname:"srv-6slrx.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:15:47.751873 containerd[1683]: 2025-12-16 14:15:47.201 [INFO][4154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:15:47.751873 containerd[1683]: 2025-12-16 14:15:47.429 [INFO][4154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 14:15:47.751873 containerd[1683]: 2025-12-16 14:15:47.430 [INFO][4154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-6slrx.gb1.brightbox.com' Dec 16 14:15:47.751873 containerd[1683]: 2025-12-16 14:15:47.470 [INFO][4154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.751873 containerd[1683]: 2025-12-16 14:15:47.492 [INFO][4154] ipam/ipam.go 394: Looking up existing affinities for host host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.751873 containerd[1683]: 2025-12-16 14:15:47.534 [INFO][4154] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.751873 containerd[1683]: 2025-12-16 14:15:47.546 [INFO][4154] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.751873 containerd[1683]: 2025-12-16 14:15:47.551 [INFO][4154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.752405 containerd[1683]: 2025-12-16 14:15:47.551 [INFO][4154] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.752405 containerd[1683]: 2025-12-16 14:15:47.558 [INFO][4154] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999 Dec 16 14:15:47.752405 containerd[1683]: 2025-12-16 14:15:47.572 [INFO][4154] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.752405 containerd[1683]: 2025-12-16 14:15:47.603 [INFO][4154] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.66/26] block=192.168.102.64/26 handle="k8s-pod-network.3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.752405 containerd[1683]: 2025-12-16 14:15:47.603 [INFO][4154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.66/26] handle="k8s-pod-network.3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.752405 containerd[1683]: 2025-12-16 14:15:47.604 [INFO][4154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 14:15:47.752405 containerd[1683]: 2025-12-16 14:15:47.604 [INFO][4154] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.66/26] IPv6=[] ContainerID="3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" HandleID="k8s-pod-network.3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" Workload="srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0" Dec 16 14:15:47.752892 containerd[1683]: 2025-12-16 14:15:47.642 [INFO][4087] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" Namespace="calico-system" Pod="csi-node-driver-tfzg7" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1248d2d9-77a6-4a9d-9b93-4af871a2edbf", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-tfzg7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c83f3ca113", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:47.753024 containerd[1683]: 2025-12-16 14:15:47.649 [INFO][4087] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.66/32] ContainerID="3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" Namespace="calico-system" Pod="csi-node-driver-tfzg7" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0" Dec 16 14:15:47.753024 containerd[1683]: 2025-12-16 14:15:47.649 [INFO][4087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c83f3ca113 ContainerID="3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" Namespace="calico-system" Pod="csi-node-driver-tfzg7" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0" Dec 16 14:15:47.753024 containerd[1683]: 2025-12-16 14:15:47.662 [INFO][4087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" Namespace="calico-system" Pod="csi-node-driver-tfzg7" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0" Dec 16 14:15:47.753295 containerd[1683]: 2025-12-16 14:15:47.667 [INFO][4087] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" Namespace="calico-system" Pod="csi-node-driver-tfzg7" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1248d2d9-77a6-4a9d-9b93-4af871a2edbf", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999", Pod:"csi-node-driver-tfzg7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c83f3ca113", MAC:"3a:99:0a:cf:d5:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:47.753430 containerd[1683]: 2025-12-16 14:15:47.700 [INFO][4087] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" Namespace="calico-system" Pod="csi-node-driver-tfzg7" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-csi--node--driver--tfzg7-eth0" Dec 16 14:15:47.830313 systemd-networkd[1572]: calib5f1342c415: Link UP Dec 16 14:15:47.835870 systemd-networkd[1572]: calib5f1342c415: Gained carrier Dec 16 14:15:47.858052 containerd[1683]: time="2025-12-16T14:15:47.857985210Z" level=info msg="connecting to shim 5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50" address="unix:///run/containerd/s/198307fab97dc75038df60b97b7224b192f1c5c9cb1c508304463d9a6893e299" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:15:47.892471 containerd[1683]: time="2025-12-16T14:15:47.892407234Z" level=info msg="connecting to shim 3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999" address="unix:///run/containerd/s/e4d0976c30f17263d75339ec658caef7bde92b7b3175b0e96587a7741870297d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:15:47.902147 containerd[1683]: 2025-12-16 14:15:46.846 [INFO][4089] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 14:15:47.902147 containerd[1683]: 2025-12-16 14:15:46.944 [INFO][4089] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0 calico-kube-controllers-8c8c88d5b- calico-system 5790375d-68f1-4555-984f-974084235d42 869 0 2025-12-16 14:15:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:8c8c88d5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-6slrx.gb1.brightbox.com calico-kube-controllers-8c8c88d5b-gxh2p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib5f1342c415 [] [] }} ContainerID="b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" Namespace="calico-system" Pod="calico-kube-controllers-8c8c88d5b-gxh2p" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-" Dec 16 14:15:47.902147 containerd[1683]: 2025-12-16 14:15:46.947 [INFO][4089] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" Namespace="calico-system" Pod="calico-kube-controllers-8c8c88d5b-gxh2p" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0" Dec 16 14:15:47.902147 containerd[1683]: 2025-12-16 14:15:47.198 [INFO][4160] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" HandleID="k8s-pod-network.b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" Workload="srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0" Dec 16 14:15:47.902478 containerd[1683]: 2025-12-16 14:15:47.204 [INFO][4160] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" HandleID="k8s-pod-network.b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" Workload="srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031c2d0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-6slrx.gb1.brightbox.com", "pod":"calico-kube-controllers-8c8c88d5b-gxh2p", "timestamp":"2025-12-16 14:15:47.198667248 +0000 UTC"}, Hostname:"srv-6slrx.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:15:47.902478 containerd[1683]: 2025-12-16 14:15:47.205 [INFO][4160] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:15:47.902478 containerd[1683]: 2025-12-16 14:15:47.604 [INFO][4160] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 14:15:47.902478 containerd[1683]: 2025-12-16 14:15:47.605 [INFO][4160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-6slrx.gb1.brightbox.com' Dec 16 14:15:47.902478 containerd[1683]: 2025-12-16 14:15:47.645 [INFO][4160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.902478 containerd[1683]: 2025-12-16 14:15:47.679 [INFO][4160] ipam/ipam.go 394: Looking up existing affinities for host host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.902478 containerd[1683]: 2025-12-16 14:15:47.728 [INFO][4160] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.902478 containerd[1683]: 2025-12-16 14:15:47.735 [INFO][4160] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.902478 containerd[1683]: 2025-12-16 14:15:47.745 [INFO][4160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.902830 containerd[1683]: 2025-12-16 14:15:47.745 [INFO][4160] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.902830 containerd[1683]: 2025-12-16 14:15:47.750 [INFO][4160] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef Dec 16 14:15:47.902830 containerd[1683]: 2025-12-16 14:15:47.766 [INFO][4160] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.902830 containerd[1683]: 2025-12-16 14:15:47.784 [INFO][4160] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.67/26] block=192.168.102.64/26 handle="k8s-pod-network.b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.902830 containerd[1683]: 2025-12-16 14:15:47.787 [INFO][4160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.67/26] handle="k8s-pod-network.b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:47.902830 containerd[1683]: 2025-12-16 14:15:47.787 [INFO][4160] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
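Every pod in this capture is served from the same affine block, 192.168.102.64/26, and the host-wide IPAM lock serializes the requests: [4160] reports "About to acquire host-wide IPAM lock" at 14:15:47.205 and only acquires it at 14:15:47.604, the moment [4154] releases it, before claiming 192.168.102.67. A /26 gives the node 64 addresses to hand out, which a short standard-library check confirms:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The block affinity claimed by the node in the trace above.
        block := netip.MustParsePrefix("192.168.102.64/26")

        size := 1 << (32 - block.Bits()) // 64 addresses in a /26
        first := block.Masked().Addr()
        last := first
        for i := 0; i < size-1; i++ {
            last = last.Next()
        }

        fmt.Printf("%s: %d addresses, %s through %s\n", block, size, first, last)
        // Prints: 192.168.102.64/26: 64 addresses, 192.168.102.64 through 192.168.102.127
    }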
Dec 16 14:15:47.902830 containerd[1683]: 2025-12-16 14:15:47.788 [INFO][4160] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.67/26] IPv6=[] ContainerID="b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" HandleID="k8s-pod-network.b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" Workload="srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0" Dec 16 14:15:47.903910 containerd[1683]: 2025-12-16 14:15:47.809 [INFO][4089] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" Namespace="calico-system" Pod="calico-kube-controllers-8c8c88d5b-gxh2p" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0", GenerateName:"calico-kube-controllers-8c8c88d5b-", Namespace:"calico-system", SelfLink:"", UID:"5790375d-68f1-4555-984f-974084235d42", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c8c88d5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-8c8c88d5b-gxh2p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.102.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5f1342c415", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:47.904199 containerd[1683]: 2025-12-16 14:15:47.810 [INFO][4089] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.67/32] ContainerID="b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" Namespace="calico-system" Pod="calico-kube-controllers-8c8c88d5b-gxh2p" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0" Dec 16 14:15:47.904199 containerd[1683]: 2025-12-16 14:15:47.810 [INFO][4089] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5f1342c415 ContainerID="b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" Namespace="calico-system" Pod="calico-kube-controllers-8c8c88d5b-gxh2p" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0" Dec 16 14:15:47.904199 containerd[1683]: 2025-12-16 14:15:47.839 [INFO][4089] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" Namespace="calico-system" Pod="calico-kube-controllers-8c8c88d5b-gxh2p" 
WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0" Dec 16 14:15:47.904347 containerd[1683]: 2025-12-16 14:15:47.845 [INFO][4089] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" Namespace="calico-system" Pod="calico-kube-controllers-8c8c88d5b-gxh2p" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0", GenerateName:"calico-kube-controllers-8c8c88d5b-", Namespace:"calico-system", SelfLink:"", UID:"5790375d-68f1-4555-984f-974084235d42", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c8c88d5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef", Pod:"calico-kube-controllers-8c8c88d5b-gxh2p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.102.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5f1342c415", MAC:"a2:32:fc:55:9d:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:47.904643 containerd[1683]: 2025-12-16 14:15:47.879 [INFO][4089] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" Namespace="calico-system" Pod="calico-kube-controllers-8c8c88d5b-gxh2p" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--kube--controllers--8c8c88d5b--gxh2p-eth0" Dec 16 14:15:47.963473 systemd[1]: Started cri-containerd-3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999.scope - libcontainer container 3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999. 
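The "connecting to shim" lines show containerd dialing each sandbox's shim over a ttrpc socket under /run/containerd/s/, after which systemd starts tracking the matching cri-containerd-<id>.scope unit. A hedged sketch of reading that state back with the containerd Go client; the import path below is the containerd 1.x one (2.x moved the client into a separate package), and it assumes the default /run/containerd/containerd.sock is reachable:

    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd" // containerd 2.x: .../containerd/v2/client
    )

    func main() {
        // Same socket and namespace the CRI integration uses in the log above.
        client, err := containerd.New("/run/containerd/containerd.sock",
            containerd.WithDefaultNamespace("k8s.io"))
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        containers, err := client.Containers(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            // Sandbox IDs such as 3d1e349e5643... should show up here once
            // their shims are running.
            fmt.Println(c.ID())
        }
    }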
Dec 16 14:15:47.987102 systemd-networkd[1572]: cali734e82f2e8f: Link UP Dec 16 14:15:47.989377 systemd-networkd[1572]: cali734e82f2e8f: Gained carrier Dec 16 14:15:48.026559 containerd[1683]: 2025-12-16 14:15:46.794 [INFO][4086] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 14:15:48.026559 containerd[1683]: 2025-12-16 14:15:46.904 [INFO][4086] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0 coredns-674b8bbfcf- kube-system 142f5ab3-cbeb-48ba-8db9-bbe60551f068 866 0 2025-12-16 14:14:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-6slrx.gb1.brightbox.com coredns-674b8bbfcf-x9ncs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali734e82f2e8f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" Namespace="kube-system" Pod="coredns-674b8bbfcf-x9ncs" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-" Dec 16 14:15:48.026559 containerd[1683]: 2025-12-16 14:15:46.904 [INFO][4086] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" Namespace="kube-system" Pod="coredns-674b8bbfcf-x9ncs" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0" Dec 16 14:15:48.026559 containerd[1683]: 2025-12-16 14:15:47.223 [INFO][4148] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" HandleID="k8s-pod-network.dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" Workload="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0" Dec 16 14:15:48.027100 containerd[1683]: 2025-12-16 14:15:47.223 [INFO][4148] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" HandleID="k8s-pod-network.dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" Workload="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7810), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-6slrx.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-x9ncs", "timestamp":"2025-12-16 14:15:47.223522895 +0000 UTC"}, Hostname:"srv-6slrx.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:15:48.027100 containerd[1683]: 2025-12-16 14:15:47.223 [INFO][4148] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:15:48.027100 containerd[1683]: 2025-12-16 14:15:47.789 [INFO][4148] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
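Each "Link UP" followed by "Gained carrier" from systemd-networkd corresponds to the host side of a pod veth that the CNI plugin just created, named "cali" plus a hash (cali9c83f3ca113, calib5f1342c415, cali734e82f2e8f, ...). A sketch that lists those interfaces and their operational state on the node, using the third-party vishvananda/netlink package (an assumption for illustration, not something this system is known to ship):

    package main

    import (
        "fmt"
        "log"
        "strings"

        "github.com/vishvananda/netlink"
    )

    func main() {
        links, err := netlink.LinkList()
        if err != nil {
            log.Fatal(err)
        }
        for _, l := range links {
            attrs := l.Attrs()
            // Calico names the host side of each pod veth "cali" plus a hash.
            if strings.HasPrefix(attrs.Name, "cali") {
                fmt.Printf("%-16s oper=%s mtu=%d\n", attrs.Name, attrs.OperState, attrs.MTU)
            }
        }
    }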
Dec 16 14:15:48.027100 containerd[1683]: 2025-12-16 14:15:47.789 [INFO][4148] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-6slrx.gb1.brightbox.com' Dec 16 14:15:48.027100 containerd[1683]: 2025-12-16 14:15:47.822 [INFO][4148] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.027100 containerd[1683]: 2025-12-16 14:15:47.857 [INFO][4148] ipam/ipam.go 394: Looking up existing affinities for host host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.027100 containerd[1683]: 2025-12-16 14:15:47.881 [INFO][4148] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.027100 containerd[1683]: 2025-12-16 14:15:47.895 [INFO][4148] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.027100 containerd[1683]: 2025-12-16 14:15:47.904 [INFO][4148] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.028055 containerd[1683]: 2025-12-16 14:15:47.904 [INFO][4148] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.028055 containerd[1683]: 2025-12-16 14:15:47.914 [INFO][4148] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90 Dec 16 14:15:48.028055 containerd[1683]: 2025-12-16 14:15:47.923 [INFO][4148] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.028055 containerd[1683]: 2025-12-16 14:15:47.943 [INFO][4148] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.68/26] block=192.168.102.64/26 handle="k8s-pod-network.dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.028055 containerd[1683]: 2025-12-16 14:15:47.943 [INFO][4148] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.68/26] handle="k8s-pod-network.dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.028055 containerd[1683]: 2025-12-16 14:15:47.943 [INFO][4148] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 14:15:48.028055 containerd[1683]: 2025-12-16 14:15:47.943 [INFO][4148] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.68/26] IPv6=[] ContainerID="dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" HandleID="k8s-pod-network.dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" Workload="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0" Dec 16 14:15:48.029113 containerd[1683]: 2025-12-16 14:15:47.976 [INFO][4086] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" Namespace="kube-system" Pod="coredns-674b8bbfcf-x9ncs" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"142f5ab3-cbeb-48ba-8db9-bbe60551f068", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-x9ncs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali734e82f2e8f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:48.029113 containerd[1683]: 2025-12-16 14:15:47.976 [INFO][4086] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.68/32] ContainerID="dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" Namespace="kube-system" Pod="coredns-674b8bbfcf-x9ncs" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0" Dec 16 14:15:48.029113 containerd[1683]: 2025-12-16 14:15:47.976 [INFO][4086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali734e82f2e8f ContainerID="dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" Namespace="kube-system" Pod="coredns-674b8bbfcf-x9ncs" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0" Dec 16 14:15:48.029113 containerd[1683]: 2025-12-16 14:15:47.981 [INFO][4086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-x9ncs" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0" Dec 16 14:15:48.029113 containerd[1683]: 2025-12-16 14:15:47.982 [INFO][4086] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" Namespace="kube-system" Pod="coredns-674b8bbfcf-x9ncs" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"142f5ab3-cbeb-48ba-8db9-bbe60551f068", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90", Pod:"coredns-674b8bbfcf-x9ncs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali734e82f2e8f", MAC:"ba:17:01:fe:e3:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:48.029113 containerd[1683]: 2025-12-16 14:15:48.017 [INFO][4086] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" Namespace="kube-system" Pod="coredns-674b8bbfcf-x9ncs" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--x9ncs-eth0" Dec 16 14:15:48.089120 containerd[1683]: time="2025-12-16T14:15:48.088969597Z" level=info msg="connecting to shim b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef" address="unix:///run/containerd/s/9ba00c02e62737f9468e60f596b4399f9016b7948af8c1231cfbab8798571fa6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:15:48.131607 systemd[1]: Started cri-containerd-5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50.scope - libcontainer container 5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50. 
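The coredns WorkloadEndpoint above carries three named ports that the Go struct dump prints in hex; decoded, they are the usual CoreDNS ports:

    package main

    import "fmt"

    func main() {
        // Port values exactly as printed in the endpoint dump above.
        fmt.Println("dns     UDP", 0x35)   // 53
        fmt.Println("dns-tcp TCP", 0x35)   // 53
        fmt.Println("metrics TCP", 0x23c1) // 9153
    }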
Dec 16 14:15:48.187147 containerd[1683]: time="2025-12-16T14:15:48.186933458Z" level=info msg="connecting to shim dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90" address="unix:///run/containerd/s/e1e339478efae7b001948741f7a828492aab2b1e71801b04445472e71bfa4ac0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:15:48.199000 audit: BPF prog-id=181 op=LOAD Dec 16 14:15:48.206000 audit: BPF prog-id=182 op=LOAD Dec 16 14:15:48.206000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4282 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364316533343965353634336235323835393533623932626362353436 Dec 16 14:15:48.208000 audit: BPF prog-id=182 op=UNLOAD Dec 16 14:15:48.208000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4282 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364316533343965353634336235323835393533623932626362353436 Dec 16 14:15:48.210000 audit: BPF prog-id=183 op=LOAD Dec 16 14:15:48.210000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4282 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364316533343965353634336235323835393533623932626362353436 Dec 16 14:15:48.215000 audit: BPF prog-id=184 op=LOAD Dec 16 14:15:48.215000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4282 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364316533343965353634336235323835393533623932626362353436 Dec 16 14:15:48.215000 audit: BPF prog-id=184 op=UNLOAD Dec 16 14:15:48.215000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4282 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.215000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364316533343965353634336235323835393533623932626362353436 Dec 16 14:15:48.215000 audit: BPF prog-id=183 op=UNLOAD Dec 16 14:15:48.215000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4282 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364316533343965353634336235323835393533623932626362353436 Dec 16 14:15:48.215000 audit: BPF prog-id=185 op=LOAD Dec 16 14:15:48.215000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4282 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364316533343965353634336235323835393533623932626362353436 Dec 16 14:15:48.249334 systemd-networkd[1572]: cali31e0b241fc3: Link UP Dec 16 14:15:48.259850 systemd-networkd[1572]: cali31e0b241fc3: Gained carrier Dec 16 14:15:48.315000 audit: BPF prog-id=186 op=LOAD Dec 16 14:15:48.323000 audit: BPF prog-id=187 op=LOAD Dec 16 14:15:48.323000 audit[4326]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4267 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363265346432313235643637636137643162393831646631343637 Dec 16 14:15:48.323000 audit: BPF prog-id=187 op=UNLOAD Dec 16 14:15:48.323000 audit[4326]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4267 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363265346432313235643637636137643162393831646631343637 Dec 16 14:15:48.326000 audit: BPF prog-id=188 op=LOAD Dec 16 14:15:48.326000 audit[4326]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4267 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 14:15:48.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363265346432313235643637636137643162393831646631343637 Dec 16 14:15:48.327000 audit: BPF prog-id=189 op=LOAD Dec 16 14:15:48.327000 audit[4326]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4267 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363265346432313235643637636137643162393831646631343637 Dec 16 14:15:48.329000 audit: BPF prog-id=189 op=UNLOAD Dec 16 14:15:48.329000 audit[4326]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4267 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363265346432313235643637636137643162393831646631343637 Dec 16 14:15:48.329000 audit: BPF prog-id=188 op=UNLOAD Dec 16 14:15:48.329000 audit[4326]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4267 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363265346432313235643637636137643162393831646631343637 Dec 16 14:15:48.329000 audit: BPF prog-id=190 op=LOAD Dec 16 14:15:48.329000 audit[4326]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4267 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531363265346432313235643637636137643162393831646631343637 Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:46.898 [INFO][4096] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:46.970 [INFO][4096] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0 calico-apiserver-66659c8785- calico-apiserver 6e75d316-f851-4233-b053-cd9dc148b92b 864 0 2025-12-16 14:15:12 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66659c8785 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-6slrx.gb1.brightbox.com calico-apiserver-66659c8785-m7865 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali31e0b241fc3 [] [] }} ContainerID="c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-m7865" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-" Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:46.975 [INFO][4096] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-m7865" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0" Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:47.243 [INFO][4175] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" HandleID="k8s-pod-network.c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" Workload="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0" Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:47.243 [INFO][4175] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" HandleID="k8s-pod-network.c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" Workload="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030e0d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-6slrx.gb1.brightbox.com", "pod":"calico-apiserver-66659c8785-m7865", "timestamp":"2025-12-16 14:15:47.243389812 +0000 UTC"}, Hostname:"srv-6slrx.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:47.243 [INFO][4175] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:47.943 [INFO][4175] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:47.947 [INFO][4175] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-6slrx.gb1.brightbox.com' Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:48.012 [INFO][4175] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:48.057 [INFO][4175] ipam/ipam.go 394: Looking up existing affinities for host host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:48.084 [INFO][4175] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:48.093 [INFO][4175] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:48.104 [INFO][4175] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:48.106 [INFO][4175] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:48.119 [INFO][4175] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953 Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:48.154 [INFO][4175] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:48.197 [INFO][4175] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.69/26] block=192.168.102.64/26 handle="k8s-pod-network.c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:48.198 [INFO][4175] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.69/26] handle="k8s-pod-network.c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:48.198 [INFO][4175] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 14:15:48.377139 containerd[1683]: 2025-12-16 14:15:48.198 [INFO][4175] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.69/26] IPv6=[] ContainerID="c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" HandleID="k8s-pod-network.c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" Workload="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0" Dec 16 14:15:48.380556 containerd[1683]: 2025-12-16 14:15:48.221 [INFO][4096] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-m7865" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0", GenerateName:"calico-apiserver-66659c8785-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e75d316-f851-4233-b053-cd9dc148b92b", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 15, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66659c8785", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-66659c8785-m7865", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31e0b241fc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:48.380556 containerd[1683]: 2025-12-16 14:15:48.221 [INFO][4096] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.69/32] ContainerID="c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-m7865" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0" Dec 16 14:15:48.380556 containerd[1683]: 2025-12-16 14:15:48.221 [INFO][4096] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31e0b241fc3 ContainerID="c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-m7865" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0" Dec 16 14:15:48.380556 containerd[1683]: 2025-12-16 14:15:48.293 [INFO][4096] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-m7865" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0" Dec 16 14:15:48.380556 containerd[1683]: 2025-12-16 
14:15:48.296 [INFO][4096] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-m7865" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0", GenerateName:"calico-apiserver-66659c8785-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e75d316-f851-4233-b053-cd9dc148b92b", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 15, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66659c8785", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953", Pod:"calico-apiserver-66659c8785-m7865", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31e0b241fc3", MAC:"ca:c0:fd:fe:01:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:48.380556 containerd[1683]: 2025-12-16 14:15:48.350 [INFO][4096] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-m7865" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--m7865-eth0" Dec 16 14:15:48.411615 systemd[1]: Started cri-containerd-dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90.scope - libcontainer container dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90. Dec 16 14:15:48.428504 systemd[1]: Started cri-containerd-b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef.scope - libcontainer container b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef. 
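The audit SYSCALL/PROCTITLE records interleaved here are runc loading and unloading BPF programs while setting up each container; the proctitle field is the process's argv, hex encoded with NUL separators and truncated by the kernel. Decoding a prefix of one of the values above recovers the runc command line:

    package main

    import (
        "encoding/hex"
        "fmt"
        "log"
        "strings"
    )

    func main() {
        // Prefix of one proctitle= value from the audit records above.
        const title = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"

        raw, err := hex.DecodeString(title)
        if err != nil {
            log.Fatal(err)
        }
        argv := strings.Split(string(raw), "\x00")
        fmt.Println(strings.Join(argv, " "))
        // Prints: runc --root /run/containerd/runc/k8s.io --log
    }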
Dec 16 14:15:48.462000 audit: BPF prog-id=191 op=LOAD Dec 16 14:15:48.464000 audit: BPF prog-id=192 op=LOAD Dec 16 14:15:48.464000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe238 a2=98 a3=0 items=0 ppid=4407 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626432643639356635346166646137663164303866396263386530 Dec 16 14:15:48.465000 audit: BPF prog-id=192 op=UNLOAD Dec 16 14:15:48.465000 audit[4426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4407 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626432643639356635346166646137663164303866396263386530 Dec 16 14:15:48.465000 audit: BPF prog-id=193 op=LOAD Dec 16 14:15:48.465000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe488 a2=98 a3=0 items=0 ppid=4407 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626432643639356635346166646137663164303866396263386530 Dec 16 14:15:48.465000 audit: BPF prog-id=194 op=LOAD Dec 16 14:15:48.465000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001fe218 a2=98 a3=0 items=0 ppid=4407 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626432643639356635346166646137663164303866396263386530 Dec 16 14:15:48.465000 audit: BPF prog-id=194 op=UNLOAD Dec 16 14:15:48.465000 audit[4426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4407 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626432643639356635346166646137663164303866396263386530 Dec 16 14:15:48.465000 audit: BPF prog-id=193 op=UNLOAD Dec 16 14:15:48.465000 audit[4426]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4407 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626432643639356635346166646137663164303866396263386530 Dec 16 14:15:48.466000 audit: BPF prog-id=195 op=LOAD Dec 16 14:15:48.466000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe6e8 a2=98 a3=0 items=0 ppid=4407 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626432643639356635346166646137663164303866396263386530 Dec 16 14:15:48.480332 containerd[1683]: time="2025-12-16T14:15:48.480180230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tfzg7,Uid:1248d2d9-77a6-4a9d-9b93-4af871a2edbf,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d1e349e5643b5285953b92bcb5462261a3856bad2a65a36fcd5d747c6321999\"" Dec 16 14:15:48.499034 containerd[1683]: time="2025-12-16T14:15:48.498593941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 14:15:48.526425 systemd-networkd[1572]: cali0f4ab86104a: Gained IPv6LL Dec 16 14:15:48.552777 containerd[1683]: time="2025-12-16T14:15:48.552715272Z" level=info msg="connecting to shim c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953" address="unix:///run/containerd/s/bda2404029f5ef140edbc80c08b394c5688013529335a81ca230fa722e61109f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:15:48.578309 containerd[1683]: time="2025-12-16T14:15:48.578259778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66659c8785-q7tzf,Uid:8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0,Namespace:calico-apiserver,Attempt:0,}" Dec 16 14:15:48.668000 audit: BPF prog-id=196 op=LOAD Dec 16 14:15:48.672000 audit: BPF prog-id=197 op=LOAD Dec 16 14:15:48.673346 systemd-networkd[1572]: cali259883134f3: Link UP Dec 16 14:15:48.672000 audit[4403]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=4360 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238356339323731376130353665633464633031636334306333346533 Dec 16 14:15:48.676000 audit: BPF prog-id=197 op=UNLOAD Dec 16 14:15:48.676000 audit[4403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4360 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 14:15:48.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238356339323731376130353665633464633031636334306333346533 Dec 16 14:15:48.678846 systemd-networkd[1572]: cali259883134f3: Gained carrier Dec 16 14:15:48.681000 audit: BPF prog-id=198 op=LOAD Dec 16 14:15:48.681000 audit[4403]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=4360 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238356339323731376130353665633464633031636334306333346533 Dec 16 14:15:48.684000 audit: BPF prog-id=199 op=LOAD Dec 16 14:15:48.684000 audit[4403]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=4360 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238356339323731376130353665633464633031636334306333346533 Dec 16 14:15:48.687000 audit: BPF prog-id=199 op=UNLOAD Dec 16 14:15:48.687000 audit[4403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4360 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238356339323731376130353665633464633031636334306333346533 Dec 16 14:15:48.689000 audit: BPF prog-id=198 op=UNLOAD Dec 16 14:15:48.689000 audit[4403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4360 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238356339323731376130353665633464633031636334306333346533 Dec 16 14:15:48.689000 audit: BPF prog-id=200 op=LOAD Dec 16 14:15:48.689000 audit[4403]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=4360 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.689000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238356339323731376130353665633464633031636334306333346533 Dec 16 14:15:48.691762 systemd[1]: Started cri-containerd-c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953.scope - libcontainer container c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953. Dec 16 14:15:48.726373 systemd-networkd[1572]: cali73c2d0f2eeb: Link UP Dec 16 14:15:48.727309 systemd-networkd[1572]: cali73c2d0f2eeb: Gained carrier Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:47.860 [INFO][4230] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:47.912 [INFO][4230] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0 coredns-674b8bbfcf- kube-system 764f81db-5c32-41e4-8912-f297ff0e1255 858 0 2025-12-16 14:14:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-6slrx.gb1.brightbox.com coredns-674b8bbfcf-t2lk6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali259883134f3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2lk6" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-" Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:47.919 [INFO][4230] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2lk6" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0" Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.421 [INFO][4318] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" HandleID="k8s-pod-network.206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" Workload="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0" Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.425 [INFO][4318] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" HandleID="k8s-pod-network.206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" Workload="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006249c0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-6slrx.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-t2lk6", "timestamp":"2025-12-16 14:15:48.421817148 +0000 UTC"}, Hostname:"srv-6slrx.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.427 [INFO][4318] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
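The proctitle= values in the audit PROCTITLE records above are hex-encoded argv buffers with NUL bytes between the arguments; decoded, each one spells out the runc invocation for the task being set up, i.e. runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/ followed by a container id that is cut short in the record. A minimal standalone decoder, assuming only the Python standard library (ausearch -i performs the same interpretation):

    #!/usr/bin/env python3
    """Decode the hex-encoded proctitle field of an audit PROCTITLE record."""
    import sys

    def decode_proctitle(hex_value: str) -> str:
        # auditd hex-encodes proctitle because the kernel reports argv as one
        # buffer with NUL separators between the individual arguments.
        raw = bytes.fromhex(hex_value)
        return " ".join(
            part.decode("utf-8", errors="replace")
            for part in raw.split(b"\x00")
            if part
        )

    if __name__ == "__main__":
        # Usage: decode_proctitle.py <hex string copied from a proctitle= field>
        print(decode_proctitle(sys.argv[1]))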
Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.427 [INFO][4318] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.427 [INFO][4318] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-6slrx.gb1.brightbox.com' Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.448 [INFO][4318] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.464 [INFO][4318] ipam/ipam.go 394: Looking up existing affinities for host host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.490 [INFO][4318] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.504 [INFO][4318] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.519 [INFO][4318] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.522 [INFO][4318] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.536 [INFO][4318] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.555 [INFO][4318] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.574 [INFO][4318] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.70/26] block=192.168.102.64/26 handle="k8s-pod-network.206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.574 [INFO][4318] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.70/26] handle="k8s-pod-network.206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.574 [INFO][4318] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
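The ipam.go records above narrate Calico's per-node block affinity in order: take the host-wide IPAM lock, confirm the node's affinity for block 192.168.102.64/26, load the block, claim one free address, write the block back, release the lock. The toy sketch below is illustrative only and is not Calico's code; assuming, hypothetically, that .65 through .69 are already claimed, it hands out 192.168.102.70, matching the address assigned above.

    # Illustrative sketch of next-free-address assignment from an affine block.
    # NOT Calico's implementation; it only mirrors the sequence the ipam.go
    # log records above describe.
    import ipaddress

    def assign_from_block(block_cidr: str, claimed: set[str]) -> str:
        block = ipaddress.ip_network(block_cidr)
        for host in block.hosts():
            if str(host) not in claimed:
                claimed.add(str(host))
                return f"{host}/32"
        raise RuntimeError(f"block {block_cidr} is exhausted")

    # Hypothetical state: .65-.69 already handed to earlier pods on this node.
    claimed = {f"192.168.102.{i}" for i in range(65, 70)}
    print(assign_from_block("192.168.102.64/26", claimed))  # -> 192.168.102.70/32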
Dec 16 14:15:48.747754 containerd[1683]: 2025-12-16 14:15:48.574 [INFO][4318] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.70/26] IPv6=[] ContainerID="206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" HandleID="k8s-pod-network.206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" Workload="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0" Dec 16 14:15:48.752236 containerd[1683]: 2025-12-16 14:15:48.618 [INFO][4230] cni-plugin/k8s.go 418: Populated endpoint ContainerID="206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2lk6" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"764f81db-5c32-41e4-8912-f297ff0e1255", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-t2lk6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali259883134f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:48.752236 containerd[1683]: 2025-12-16 14:15:48.619 [INFO][4230] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.70/32] ContainerID="206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2lk6" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0" Dec 16 14:15:48.752236 containerd[1683]: 2025-12-16 14:15:48.619 [INFO][4230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali259883134f3 ContainerID="206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2lk6" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0" Dec 16 14:15:48.752236 containerd[1683]: 2025-12-16 14:15:48.690 [INFO][4230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-t2lk6" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0" Dec 16 14:15:48.752236 containerd[1683]: 2025-12-16 14:15:48.693 [INFO][4230] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2lk6" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"764f81db-5c32-41e4-8912-f297ff0e1255", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd", Pod:"coredns-674b8bbfcf-t2lk6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali259883134f3", MAC:"8e:30:8e:af:23:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:48.752236 containerd[1683]: 2025-12-16 14:15:48.731 [INFO][4230] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2lk6" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-coredns--674b8bbfcf--t2lk6-eth0" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.048 [INFO][4250] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.097 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0 whisker-7b4555d8df- calico-system 32682285-0749-4677-aeaa-b30aca23774e 951 0 2025-12-16 14:15:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b4555d8df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-6slrx.gb1.brightbox.com whisker-7b4555d8df-7md4r eth0 whisker [] [] [kns.calico-system 
ksa.calico-system.whisker] cali73c2d0f2eeb [] [] }} ContainerID="2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" Namespace="calico-system" Pod="whisker-7b4555d8df-7md4r" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.098 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" Namespace="calico-system" Pod="whisker-7b4555d8df-7md4r" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.511 [INFO][4386] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" HandleID="k8s-pod-network.2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" Workload="srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.511 [INFO][4386] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" HandleID="k8s-pod-network.2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" Workload="srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123880), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-6slrx.gb1.brightbox.com", "pod":"whisker-7b4555d8df-7md4r", "timestamp":"2025-12-16 14:15:48.511294305 +0000 UTC"}, Hostname:"srv-6slrx.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.511 [INFO][4386] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.574 [INFO][4386] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.575 [INFO][4386] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-6slrx.gb1.brightbox.com' Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.594 [INFO][4386] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.602 [INFO][4386] ipam/ipam.go 394: Looking up existing affinities for host host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.614 [INFO][4386] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.621 [INFO][4386] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.629 [INFO][4386] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.630 [INFO][4386] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.638 [INFO][4386] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.652 [INFO][4386] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.694 [INFO][4386] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.71/26] block=192.168.102.64/26 handle="k8s-pod-network.2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.694 [INFO][4386] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.71/26] handle="k8s-pod-network.2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.694 [INFO][4386] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 14:15:48.764212 containerd[1683]: 2025-12-16 14:15:48.694 [INFO][4386] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.71/26] IPv6=[] ContainerID="2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" HandleID="k8s-pod-network.2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" Workload="srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0" Dec 16 14:15:48.765172 containerd[1683]: 2025-12-16 14:15:48.699 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" Namespace="calico-system" Pod="whisker-7b4555d8df-7md4r" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0", GenerateName:"whisker-7b4555d8df-", Namespace:"calico-system", SelfLink:"", UID:"32682285-0749-4677-aeaa-b30aca23774e", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 15, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b4555d8df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"", Pod:"whisker-7b4555d8df-7md4r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.102.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali73c2d0f2eeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:48.765172 containerd[1683]: 2025-12-16 14:15:48.699 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.71/32] ContainerID="2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" Namespace="calico-system" Pod="whisker-7b4555d8df-7md4r" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0" Dec 16 14:15:48.765172 containerd[1683]: 2025-12-16 14:15:48.699 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73c2d0f2eeb ContainerID="2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" Namespace="calico-system" Pod="whisker-7b4555d8df-7md4r" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0" Dec 16 14:15:48.765172 containerd[1683]: 2025-12-16 14:15:48.728 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" Namespace="calico-system" Pod="whisker-7b4555d8df-7md4r" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0" Dec 16 14:15:48.765172 containerd[1683]: 2025-12-16 14:15:48.729 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" 
Namespace="calico-system" Pod="whisker-7b4555d8df-7md4r" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0", GenerateName:"whisker-7b4555d8df-", Namespace:"calico-system", SelfLink:"", UID:"32682285-0749-4677-aeaa-b30aca23774e", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 15, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b4555d8df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd", Pod:"whisker-7b4555d8df-7md4r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.102.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali73c2d0f2eeb", MAC:"da:17:da:92:da:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:48.765172 containerd[1683]: 2025-12-16 14:15:48.753 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" Namespace="calico-system" Pod="whisker-7b4555d8df-7md4r" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-whisker--7b4555d8df--7md4r-eth0" Dec 16 14:15:48.793150 containerd[1683]: time="2025-12-16T14:15:48.792864433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x9ncs,Uid:142f5ab3-cbeb-48ba-8db9-bbe60551f068,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90\"" Dec 16 14:15:48.831240 containerd[1683]: time="2025-12-16T14:15:48.831062251Z" level=info msg="CreateContainer within sandbox \"dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 14:15:48.832472 containerd[1683]: time="2025-12-16T14:15:48.832416004Z" level=info msg="connecting to shim 2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd" address="unix:///run/containerd/s/26eef3a07f25a99e48e13291eae3358d4038f1624d991a10fff36e1cca588194" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:15:48.859625 containerd[1683]: time="2025-12-16T14:15:48.859517789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:15:48.879686 containerd[1683]: time="2025-12-16T14:15:48.879592460Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 14:15:48.879854 containerd[1683]: time="2025-12-16T14:15:48.879741239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active 
requests=0, bytes read=0" Dec 16 14:15:48.880964 kubelet[2975]: E1216 14:15:48.880857 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 14:15:48.884171 kubelet[2975]: E1216 14:15:48.884118 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 14:15:48.894617 kubelet[2975]: E1216 14:15:48.894501 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvzdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tfzg7_calico-system(1248d2d9-77a6-4a9d-9b93-4af871a2edbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 14:15:48.897056 containerd[1683]: time="2025-12-16T14:15:48.897022772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 14:15:48.909347 systemd-networkd[1572]: cali9c83f3ca113: Gained IPv6LL Dec 16 14:15:48.966458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount434860804.mount: Deactivated successfully. 
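The pull failures above are plain 404s: the registry has no manifest for ghcr.io/flatcar/calico/csi:v3.30.4, so kubelet surfaces ErrImagePull for the calico-csi container. One way to confirm that from any machine is to ask the registry directly; the sketch below assumes anonymous pull access via the standard OCI distribution token flow (the endpoints and Accept media type are assumptions, not taken from the log).

    # Check whether a tag exists on ghcr.io (assumed anonymous OCI token flow).
    import json
    import urllib.error
    import urllib.request

    IMAGE = "flatcar/calico/csi"
    TAG = "v3.30.4"

    # 1. Fetch an anonymous pull token for the repository.
    token_url = f"https://ghcr.io/token?scope=repository:{IMAGE}:pull"
    token = json.load(urllib.request.urlopen(token_url))["token"]

    # 2. Ask for the tag's manifest; a 404 here matches the errors in the log.
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{IMAGE}/manifests/{TAG}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print("manifest digest:", resp.headers.get("Docker-Content-Digest"))
    except urllib.error.HTTPError as err:
        print("registry answered:", err.code, err.reason)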
Dec 16 14:15:48.971000 audit: BPF prog-id=201 op=LOAD Dec 16 14:15:48.994132 containerd[1683]: time="2025-12-16T14:15:48.994049171Z" level=info msg="connecting to shim 206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd" address="unix:///run/containerd/s/97188c1e734aa16cbd4474624259ca429a855c3868a564abc730477bac853680" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:15:48.997593 systemd[1]: Started cri-containerd-2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd.scope - libcontainer container 2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd. Dec 16 14:15:48.996000 audit: BPF prog-id=202 op=LOAD Dec 16 14:15:48.996000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4529 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333633653161323265666439633064323133613437333732313161 Dec 16 14:15:48.996000 audit: BPF prog-id=202 op=UNLOAD Dec 16 14:15:48.996000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4529 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333633653161323265666439633064323133613437333732313161 Dec 16 14:15:48.998000 audit: BPF prog-id=203 op=LOAD Dec 16 14:15:48.998000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4529 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333633653161323265666439633064323133613437333732313161 Dec 16 14:15:48.998000 audit: BPF prog-id=204 op=LOAD Dec 16 14:15:48.998000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4529 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333633653161323265666439633064323133613437333732313161 Dec 16 14:15:48.998000 audit: BPF prog-id=204 op=UNLOAD Dec 16 14:15:48.998000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4529 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333633653161323265666439633064323133613437333732313161 Dec 16 14:15:48.998000 audit: BPF prog-id=203 op=UNLOAD Dec 16 14:15:48.998000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4529 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333633653161323265666439633064323133613437333732313161 Dec 16 14:15:48.998000 audit: BPF prog-id=205 op=LOAD Dec 16 14:15:48.998000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4529 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:48.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335333633653161323265666439633064323133613437333732313161 Dec 16 14:15:49.041305 containerd[1683]: time="2025-12-16T14:15:49.041235227Z" level=info msg="Container f3928ae22a1fa98cbc8a485a4476e0bed1cf3d1230723b1ddab8ffc70c2b8a72: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:15:49.072446 containerd[1683]: time="2025-12-16T14:15:49.072160639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2m9mr,Uid:86b288c9-63f1-4f44-8c9e-eb5a65f83789,Namespace:calico-system,Attempt:0,} returns sandbox id \"5162e4d2125d67ca7d1b981df1467f0412588dc729c728a9cb164b6121ac9d50\"" Dec 16 14:15:49.081199 containerd[1683]: time="2025-12-16T14:15:49.080312019Z" level=info msg="CreateContainer within sandbox \"dbbd2d695f54afda7f1d08f9bc8e0f32a7b81582f14bb6d4bb8570067e8c0a90\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3928ae22a1fa98cbc8a485a4476e0bed1cf3d1230723b1ddab8ffc70c2b8a72\"" Dec 16 14:15:49.084988 containerd[1683]: time="2025-12-16T14:15:49.084943205Z" level=info msg="StartContainer for \"f3928ae22a1fa98cbc8a485a4476e0bed1cf3d1230723b1ddab8ffc70c2b8a72\"" Dec 16 14:15:49.090681 containerd[1683]: time="2025-12-16T14:15:49.090646708Z" level=info msg="connecting to shim f3928ae22a1fa98cbc8a485a4476e0bed1cf3d1230723b1ddab8ffc70c2b8a72" address="unix:///run/containerd/s/e1e339478efae7b001948741f7a828492aab2b1e71801b04445472e71bfa4ac0" protocol=ttrpc version=3 Dec 16 14:15:49.129000 audit: BPF prog-id=206 op=LOAD Dec 16 14:15:49.137000 audit: BPF prog-id=207 op=LOAD Dec 16 14:15:49.137000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c238 a2=98 a3=0 items=0 ppid=4608 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313061633636646361636339653238333131303333636137373138 Dec 16 14:15:49.138000 audit: BPF prog-id=207 op=UNLOAD Dec 16 14:15:49.138000 audit[4653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4608 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313061633636646361636339653238333131303333636137373138 Dec 16 14:15:49.139000 audit: BPF prog-id=208 op=LOAD Dec 16 14:15:49.139000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c488 a2=98 a3=0 items=0 ppid=4608 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313061633636646361636339653238333131303333636137373138 Dec 16 14:15:49.139000 audit: BPF prog-id=209 op=LOAD Dec 16 14:15:49.139000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00020c218 a2=98 a3=0 items=0 ppid=4608 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313061633636646361636339653238333131303333636137373138 Dec 16 14:15:49.139000 audit: BPF prog-id=209 op=UNLOAD Dec 16 14:15:49.139000 audit[4653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4608 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313061633636646361636339653238333131303333636137373138 Dec 16 14:15:49.139000 audit: BPF prog-id=208 op=UNLOAD Dec 16 14:15:49.139000 audit[4653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4608 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.139000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313061633636646361636339653238333131303333636137373138 Dec 16 14:15:49.139000 audit: BPF prog-id=210 op=LOAD Dec 16 14:15:49.139000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c6e8 a2=98 a3=0 items=0 ppid=4608 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231313061633636646361636339653238333131303333636137373138 Dec 16 14:15:49.192673 systemd[1]: Started cri-containerd-f3928ae22a1fa98cbc8a485a4476e0bed1cf3d1230723b1ddab8ffc70c2b8a72.scope - libcontainer container f3928ae22a1fa98cbc8a485a4476e0bed1cf3d1230723b1ddab8ffc70c2b8a72. Dec 16 14:15:49.207280 containerd[1683]: time="2025-12-16T14:15:49.203166426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c8c88d5b-gxh2p,Uid:5790375d-68f1-4555-984f-974084235d42,Namespace:calico-system,Attempt:0,} returns sandbox id \"b85c92717a056ec4dc01cc40c34e3b4300b61eb4eaf611ce3fe6702ebe3bc0ef\"" Dec 16 14:15:49.216440 systemd[1]: Started cri-containerd-206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd.scope - libcontainer container 206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd. Dec 16 14:15:49.262000 audit: BPF prog-id=211 op=LOAD Dec 16 14:15:49.264000 audit: BPF prog-id=212 op=LOAD Dec 16 14:15:49.264000 audit[4689]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000232238 a2=98 a3=0 items=0 ppid=4650 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366335616536636536313263616566343238376666623132653730 Dec 16 14:15:49.264000 audit: BPF prog-id=212 op=UNLOAD Dec 16 14:15:49.264000 audit[4689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4650 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366335616536636536313263616566343238376666623132653730 Dec 16 14:15:49.265000 audit: BPF prog-id=213 op=LOAD Dec 16 14:15:49.265000 audit[4689]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000232488 a2=98 a3=0 items=0 ppid=4650 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.265000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366335616536636536313263616566343238376666623132653730 Dec 16 14:15:49.266000 audit: BPF prog-id=214 op=LOAD Dec 16 14:15:49.266000 audit[4689]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000232218 a2=98 a3=0 items=0 ppid=4650 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366335616536636536313263616566343238376666623132653730 Dec 16 14:15:49.266000 audit: BPF prog-id=214 op=UNLOAD Dec 16 14:15:49.266000 audit[4689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4650 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366335616536636536313263616566343238376666623132653730 Dec 16 14:15:49.266000 audit: BPF prog-id=213 op=UNLOAD Dec 16 14:15:49.266000 audit[4689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4650 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366335616536636536313263616566343238376666623132653730 Dec 16 14:15:49.266000 audit: BPF prog-id=215 op=LOAD Dec 16 14:15:49.266000 audit[4689]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002326e8 a2=98 a3=0 items=0 ppid=4650 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230366335616536636536313263616566343238376666623132653730 Dec 16 14:15:49.294387 systemd-networkd[1572]: calib5f1342c415: Gained IPv6LL Dec 16 14:15:49.295000 audit: BPF prog-id=216 op=LOAD Dec 16 14:15:49.296982 systemd-networkd[1572]: cali1bdb974b590: Link UP Dec 16 14:15:49.298000 audit: BPF prog-id=217 op=LOAD Dec 16 14:15:49.298000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4407 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633393238616532326131666139386362633861343835613434373665 Dec 16 14:15:49.298000 audit: BPF prog-id=217 op=UNLOAD Dec 16 14:15:49.298000 audit[4711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4407 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633393238616532326131666139386362633861343835613434373665 Dec 16 14:15:49.298000 audit: BPF prog-id=218 op=LOAD Dec 16 14:15:49.298000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4407 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633393238616532326131666139386362633861343835613434373665 Dec 16 14:15:49.300306 systemd-networkd[1572]: cali1bdb974b590: Gained carrier Dec 16 14:15:49.299000 audit: BPF prog-id=219 op=LOAD Dec 16 14:15:49.299000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4407 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633393238616532326131666139386362633861343835613434373665 Dec 16 14:15:49.300000 audit: BPF prog-id=219 op=UNLOAD Dec 16 14:15:49.300000 audit[4711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4407 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633393238616532326131666139386362633861343835613434373665 Dec 16 14:15:49.300000 audit: BPF prog-id=218 op=UNLOAD Dec 16 14:15:49.300000 audit[4711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4407 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.300000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633393238616532326131666139386362633861343835613434373665 Dec 16 14:15:49.301000 audit: BPF prog-id=220 op=LOAD Dec 16 14:15:49.301000 audit[4711]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4407 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633393238616532326131666139386362633861343835613434373665 Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:48.937 [INFO][4556] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.036 [INFO][4556] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0 calico-apiserver-66659c8785- calico-apiserver 8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0 871 0 2025-12-16 14:15:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66659c8785 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-6slrx.gb1.brightbox.com calico-apiserver-66659c8785-q7tzf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1bdb974b590 [] [] }} ContainerID="650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-q7tzf" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-" Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.036 [INFO][4556] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-q7tzf" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0" Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.134 [INFO][4699] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" HandleID="k8s-pod-network.650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" Workload="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0" Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.135 [INFO][4699] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" HandleID="k8s-pod-network.650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" Workload="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039d7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-6slrx.gb1.brightbox.com", "pod":"calico-apiserver-66659c8785-q7tzf", "timestamp":"2025-12-16 
14:15:49.134824072 +0000 UTC"}, Hostname:"srv-6slrx.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.135 [INFO][4699] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.135 [INFO][4699] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.135 [INFO][4699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-6slrx.gb1.brightbox.com' Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.167 [INFO][4699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.184 [INFO][4699] ipam/ipam.go 394: Looking up existing affinities for host host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.199 [INFO][4699] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.211 [INFO][4699] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.222 [INFO][4699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.222 [INFO][4699] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.226 [INFO][4699] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.250 [INFO][4699] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.276 [INFO][4699] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.72/26] block=192.168.102.64/26 handle="k8s-pod-network.650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.277 [INFO][4699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.72/26] handle="k8s-pod-network.650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" host="srv-6slrx.gb1.brightbox.com" Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.277 [INFO][4699] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
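By this point the node has claimed 192.168.102.70, .71, and .72 from the same affine /26 for the coredns, whisker, and calico-apiserver pods. A small helper for pulling those per-workload assignments out of journal output shaped like the records above; the record layout comes from this log, while the journalctl unit name in the usage comment is an assumption.

    # Summarise Calico IPAM assignments from journal output, e.g.:
    #   journalctl -u containerd | python3 ipam_assignments.py
    # (unit name is an assumption; adjust to wherever these records land)
    import re
    import sys

    PATTERN = re.compile(
        r'IPAM assigned addresses IPv4=\[(?P<ip>[^\]]*)\].*?Workload="(?P<workload>[^"]+)"'
    )

    for line in sys.stdin:
        match = PATTERN.search(line)
        if match:
            print(f'{match.group("workload")}: {match.group("ip")}')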
Dec 16 14:15:49.337402 containerd[1683]: 2025-12-16 14:15:49.277 [INFO][4699] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.72/26] IPv6=[] ContainerID="650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" HandleID="k8s-pod-network.650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" Workload="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0" Dec 16 14:15:49.338374 containerd[1683]: 2025-12-16 14:15:49.281 [INFO][4556] cni-plugin/k8s.go 418: Populated endpoint ContainerID="650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-q7tzf" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0", GenerateName:"calico-apiserver-66659c8785-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 15, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66659c8785", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-66659c8785-q7tzf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1bdb974b590", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:49.338374 containerd[1683]: 2025-12-16 14:15:49.281 [INFO][4556] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.72/32] ContainerID="650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-q7tzf" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0" Dec 16 14:15:49.338374 containerd[1683]: 2025-12-16 14:15:49.281 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bdb974b590 ContainerID="650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-q7tzf" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0" Dec 16 14:15:49.338374 containerd[1683]: 2025-12-16 14:15:49.302 [INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-q7tzf" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0" Dec 16 14:15:49.338374 containerd[1683]: 2025-12-16 
14:15:49.304 [INFO][4556] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-q7tzf" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0", GenerateName:"calico-apiserver-66659c8785-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 15, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66659c8785", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-6slrx.gb1.brightbox.com", ContainerID:"650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf", Pod:"calico-apiserver-66659c8785-q7tzf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1bdb974b590", MAC:"02:e0:05:e4:3c:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:15:49.338374 containerd[1683]: 2025-12-16 14:15:49.329 [INFO][4556] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" Namespace="calico-apiserver" Pod="calico-apiserver-66659c8785-q7tzf" WorkloadEndpoint="srv--6slrx.gb1.brightbox.com-k8s-calico--apiserver--66659c8785--q7tzf-eth0" Dec 16 14:15:49.366537 containerd[1683]: time="2025-12-16T14:15:49.365783698Z" level=info msg="StartContainer for \"f3928ae22a1fa98cbc8a485a4476e0bed1cf3d1230723b1ddab8ffc70c2b8a72\" returns successfully" Dec 16 14:15:49.371628 containerd[1683]: time="2025-12-16T14:15:49.371587928Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:15:49.375150 containerd[1683]: time="2025-12-16T14:15:49.375106989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 14:15:49.375381 containerd[1683]: time="2025-12-16T14:15:49.375223006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 14:15:49.378807 kubelet[2975]: E1216 14:15:49.378733 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 14:15:49.379140 kubelet[2975]: E1216 14:15:49.378988 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 14:15:49.380816 kubelet[2975]: E1216 14:15:49.380604 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvzdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tfzg7_calico-system(1248d2d9-77a6-4a9d-9b93-4af871a2edbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 14:15:49.381494 containerd[1683]: time="2025-12-16T14:15:49.380748580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 14:15:49.396930 kubelet[2975]: E1216 14:15:49.396854 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:15:49.430145 containerd[1683]: time="2025-12-16T14:15:49.430080467Z" level=info msg="connecting to shim 650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf" address="unix:///run/containerd/s/8e578275a3abf070fc678d5bc1522bda27a6f64edab7aba6f4fc75570521b3c8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:15:49.439851 containerd[1683]: time="2025-12-16T14:15:49.439799379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2lk6,Uid:764f81db-5c32-41e4-8912-f297ff0e1255,Namespace:kube-system,Attempt:0,} returns sandbox id \"206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd\"" Dec 16 14:15:49.450675 containerd[1683]: time="2025-12-16T14:15:49.450623786Z" level=info msg="CreateContainer within sandbox \"206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 14:15:49.484705 containerd[1683]: time="2025-12-16T14:15:49.484344500Z" level=info msg="Container ce1da6a61d49078575bdf601083c9d72245b1421d2121d2295be12beab24cbc5: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:15:49.521226 containerd[1683]: time="2025-12-16T14:15:49.520990769Z" level=info msg="CreateContainer within sandbox \"206c5ae6ce612caef4287ffb12e70c72e1e320626e670bba1bed2acc98341ecd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce1da6a61d49078575bdf601083c9d72245b1421d2121d2295be12beab24cbc5\"" Dec 16 14:15:49.528510 containerd[1683]: time="2025-12-16T14:15:49.528442386Z" level=info msg="StartContainer for \"ce1da6a61d49078575bdf601083c9d72245b1421d2121d2295be12beab24cbc5\"" Dec 16 14:15:49.546449 containerd[1683]: time="2025-12-16T14:15:49.540739937Z" level=info msg="connecting to shim ce1da6a61d49078575bdf601083c9d72245b1421d2121d2295be12beab24cbc5" address="unix:///run/containerd/s/97188c1e734aa16cbd4474624259ca429a855c3868a564abc730477bac853680" protocol=ttrpc version=3 Dec 16 14:15:49.559603 containerd[1683]: time="2025-12-16T14:15:49.558485258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66659c8785-m7865,Uid:6e75d316-f851-4233-b053-cd9dc148b92b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c5363e1a22efd9c0d213a4737211a0da13eaf58d1fc17d3cf694f0b2fde2c953\"" Dec 16 14:15:49.563595 systemd[1]: Started cri-containerd-650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf.scope - libcontainer container 650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf. Dec 16 14:15:49.628441 systemd[1]: Started cri-containerd-ce1da6a61d49078575bdf601083c9d72245b1421d2121d2295be12beab24cbc5.scope - libcontainer container ce1da6a61d49078575bdf601083c9d72245b1421d2121d2295be12beab24cbc5. 
Dec 16 14:15:49.700000 audit: BPF prog-id=221 op=LOAD Dec 16 14:15:49.701000 audit: BPF prog-id=222 op=LOAD Dec 16 14:15:49.701000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4650 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316461366136316434393037383537356264663630313038336339 Dec 16 14:15:49.701000 audit: BPF prog-id=222 op=UNLOAD Dec 16 14:15:49.701000 audit[4825]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4650 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316461366136316434393037383537356264663630313038336339 Dec 16 14:15:49.703000 audit: BPF prog-id=223 op=LOAD Dec 16 14:15:49.703000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4650 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316461366136316434393037383537356264663630313038336339 Dec 16 14:15:49.703000 audit: BPF prog-id=224 op=LOAD Dec 16 14:15:49.703000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4650 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316461366136316434393037383537356264663630313038336339 Dec 16 14:15:49.703000 audit: BPF prog-id=224 op=UNLOAD Dec 16 14:15:49.703000 audit[4825]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4650 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316461366136316434393037383537356264663630313038336339 Dec 16 14:15:49.703000 audit: BPF prog-id=223 op=UNLOAD Dec 16 14:15:49.703000 audit[4825]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4650 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316461366136316434393037383537356264663630313038336339 Dec 16 14:15:49.703000 audit: BPF prog-id=225 op=LOAD Dec 16 14:15:49.703000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4650 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316461366136316434393037383537356264663630313038336339 Dec 16 14:15:49.712044 containerd[1683]: time="2025-12-16T14:15:49.711987434Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:15:49.713136 containerd[1683]: time="2025-12-16T14:15:49.712855724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b4555d8df-7md4r,Uid:32682285-0749-4677-aeaa-b30aca23774e,Namespace:calico-system,Attempt:0,} returns sandbox id \"2110ac66dcacc9e28311033ca771873f1abfc228e4971578db9d03e779105afd\"" Dec 16 14:15:49.714203 containerd[1683]: time="2025-12-16T14:15:49.713726085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 14:15:49.714203 containerd[1683]: time="2025-12-16T14:15:49.713975093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 14:15:49.714991 kubelet[2975]: E1216 14:15:49.714570 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 14:15:49.715343 kubelet[2975]: E1216 14:15:49.715111 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 14:15:49.715858 kubelet[2975]: E1216 14:15:49.715764 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb8l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2m9mr_calico-system(86b288c9-63f1-4f44-8c9e-eb5a65f83789): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 14:15:49.716690 containerd[1683]: time="2025-12-16T14:15:49.716515376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 14:15:49.717248 kubelet[2975]: E1216 14:15:49.717116 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2m9mr" 
podUID="86b288c9-63f1-4f44-8c9e-eb5a65f83789" Dec 16 14:15:49.777771 containerd[1683]: time="2025-12-16T14:15:49.777467065Z" level=info msg="StartContainer for \"ce1da6a61d49078575bdf601083c9d72245b1421d2121d2295be12beab24cbc5\" returns successfully" Dec 16 14:15:49.782000 audit: BPF prog-id=226 op=LOAD Dec 16 14:15:49.783000 audit: BPF prog-id=227 op=LOAD Dec 16 14:15:49.783000 audit[4805]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4793 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306132613864393138643565303761356436373263613130346539 Dec 16 14:15:49.783000 audit: BPF prog-id=227 op=UNLOAD Dec 16 14:15:49.783000 audit[4805]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306132613864393138643565303761356436373263613130346539 Dec 16 14:15:49.783000 audit: BPF prog-id=228 op=LOAD Dec 16 14:15:49.783000 audit[4805]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4793 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306132613864393138643565303761356436373263613130346539 Dec 16 14:15:49.783000 audit: BPF prog-id=229 op=LOAD Dec 16 14:15:49.783000 audit[4805]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4793 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306132613864393138643565303761356436373263613130346539 Dec 16 14:15:49.784000 audit: BPF prog-id=229 op=UNLOAD Dec 16 14:15:49.784000 audit[4805]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.784000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306132613864393138643565303761356436373263613130346539 Dec 16 14:15:49.784000 audit: BPF prog-id=228 op=UNLOAD Dec 16 14:15:49.784000 audit[4805]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306132613864393138643565303761356436373263613130346539 Dec 16 14:15:49.784000 audit: BPF prog-id=230 op=LOAD Dec 16 14:15:49.784000 audit[4805]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4793 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:49.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306132613864393138643565303761356436373263613130346539 Dec 16 14:15:49.870386 systemd-networkd[1572]: cali31e0b241fc3: Gained IPv6LL Dec 16 14:15:49.974550 containerd[1683]: time="2025-12-16T14:15:49.974499446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66659c8785-q7tzf,Uid:8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"650a2a8d918d5e07a5d672ca104e98b5b9aaa73b103aedc0560c08de12e9a9cf\"" Dec 16 14:15:49.997429 systemd-networkd[1572]: cali734e82f2e8f: Gained IPv6LL Dec 16 14:15:50.054406 containerd[1683]: time="2025-12-16T14:15:50.054315858Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:15:50.055507 containerd[1683]: time="2025-12-16T14:15:50.055474005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 14:15:50.055590 containerd[1683]: time="2025-12-16T14:15:50.055548343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 14:15:50.056898 kubelet[2975]: E1216 14:15:50.056804 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 14:15:50.057412 kubelet[2975]: E1216 14:15:50.056916 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 14:15:50.057637 containerd[1683]: time="2025-12-16T14:15:50.057579383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 14:15:50.059472 kubelet[2975]: E1216 14:15:50.058232 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwp5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8c8c88d5b-gxh2p_calico-system(5790375d-68f1-4555-984f-974084235d42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 14:15:50.060810 kubelet[2975]: E1216 14:15:50.060202 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" podUID="5790375d-68f1-4555-984f-974084235d42" Dec 16 14:15:50.253432 systemd-networkd[1572]: cali73c2d0f2eeb: Gained IPv6LL Dec 16 14:15:50.312970 kubelet[2975]: E1216 14:15:50.311649 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" podUID="5790375d-68f1-4555-984f-974084235d42" Dec 16 14:15:50.317472 systemd-networkd[1572]: cali259883134f3: Gained IPv6LL Dec 16 14:15:50.322560 kubelet[2975]: E1216 14:15:50.322325 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2m9mr" podUID="86b288c9-63f1-4f44-8c9e-eb5a65f83789" Dec 16 14:15:50.327707 kubelet[2975]: E1216 14:15:50.327613 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:15:50.360032 kubelet[2975]: I1216 14:15:50.355690 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x9ncs" podStartSLOduration=52.355635659 podStartE2EDuration="52.355635659s" podCreationTimestamp="2025-12-16 14:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 14:15:50.327950688 +0000 UTC m=+59.099592507" watchObservedRunningTime="2025-12-16 14:15:50.355635659 +0000 UTC m=+59.127277455" Dec 16 14:15:50.366840 containerd[1683]: time="2025-12-16T14:15:50.366373715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:15:50.367868 containerd[1683]: time="2025-12-16T14:15:50.367741183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 14:15:50.368481 containerd[1683]: time="2025-12-16T14:15:50.367930626Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 14:15:50.384416 kubelet[2975]: E1216 14:15:50.384360 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:15:50.384416 kubelet[2975]: E1216 14:15:50.384422 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:15:50.385447 containerd[1683]: time="2025-12-16T14:15:50.385233459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 14:15:50.387908 kubelet[2975]: E1216 14:15:50.387700 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkmx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66659c8785-m7865_calico-apiserver(6e75d316-f851-4233-b053-cd9dc148b92b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 14:15:50.389047 kubelet[2975]: E1216 14:15:50.389008 2975 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" podUID="6e75d316-f851-4233-b053-cd9dc148b92b" Dec 16 14:15:50.396393 kubelet[2975]: I1216 14:15:50.396159 2975 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t2lk6" podStartSLOduration=52.396134349 podStartE2EDuration="52.396134349s" podCreationTimestamp="2025-12-16 14:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 14:15:50.360400775 +0000 UTC m=+59.132042578" watchObservedRunningTime="2025-12-16 14:15:50.396134349 +0000 UTC m=+59.167776155" Dec 16 14:15:50.445396 systemd-networkd[1572]: cali1bdb974b590: Gained IPv6LL Dec 16 14:15:50.514000 audit[4915]: NETFILTER_CFG table=filter:121 family=2 entries=17 op=nft_register_rule pid=4915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:50.525270 kernel: kauditd_printk_skb: 225 callbacks suppressed Dec 16 14:15:50.525860 kernel: audit: type=1325 audit(1765894550.514:670): table=filter:121 family=2 entries=17 op=nft_register_rule pid=4915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:50.534296 kernel: audit: type=1300 audit(1765894550.514:670): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd7ce53ff0 a2=0 a3=7ffd7ce53fdc items=0 ppid=3086 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.514000 audit[4915]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd7ce53ff0 a2=0 a3=7ffd7ce53fdc items=0 ppid=3086 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.514000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:50.541420 kernel: audit: type=1327 audit(1765894550.514:670): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:50.535000 audit[4915]: NETFILTER_CFG table=nat:122 family=2 entries=35 op=nft_register_chain pid=4915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:50.549199 kernel: audit: type=1325 audit(1765894550.535:671): table=nat:122 family=2 entries=35 op=nft_register_chain pid=4915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:50.535000 audit[4915]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd7ce53ff0 a2=0 a3=7ffd7ce53fdc items=0 ppid=3086 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.557485 kernel: audit: type=1300 audit(1765894550.535:671): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd7ce53ff0 a2=0 a3=7ffd7ce53fdc items=0 ppid=3086 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.535000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:50.565207 kernel: audit: type=1327 audit(1765894550.535:671): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:50.590000 audit[4927]: NETFILTER_CFG table=filter:123 family=2 entries=14 op=nft_register_rule pid=4927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:50.590000 audit[4927]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff39f6f040 a2=0 a3=7fff39f6f02c items=0 ppid=3086 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.597049 kernel: audit: type=1325 audit(1765894550.590:672): table=filter:123 family=2 entries=14 op=nft_register_rule pid=4927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:50.597144 kernel: audit: type=1300 audit(1765894550.590:672): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff39f6f040 a2=0 a3=7fff39f6f02c items=0 ppid=3086 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:50.605216 kernel: audit: type=1327 audit(1765894550.590:672): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:50.615000 audit[4927]: NETFILTER_CFG table=nat:124 family=2 entries=56 op=nft_register_chain pid=4927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:50.621332 kernel: audit: type=1325 audit(1765894550.615:673): table=nat:124 family=2 entries=56 op=nft_register_chain pid=4927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:50.615000 audit[4927]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff39f6f040 a2=0 a3=7fff39f6f02c items=0 ppid=3086 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:50.642000 audit: BPF prog-id=231 op=LOAD Dec 16 14:15:50.642000 audit[4934]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed1f2d4d0 a2=98 a3=1fffffffffffffff items=0 ppid=4468 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.642000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 14:15:50.642000 audit: BPF prog-id=231 op=UNLOAD Dec 16 14:15:50.642000 audit[4934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffed1f2d4a0 a3=0 items=0 ppid=4468 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.642000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 14:15:50.643000 audit: BPF prog-id=232 op=LOAD Dec 16 14:15:50.643000 audit[4934]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed1f2d3b0 a2=94 a3=3 items=0 ppid=4468 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.643000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 14:15:50.643000 audit: BPF prog-id=232 op=UNLOAD Dec 16 14:15:50.643000 audit[4934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffed1f2d3b0 a2=94 a3=3 items=0 ppid=4468 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.643000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 14:15:50.643000 audit: BPF prog-id=233 op=LOAD Dec 16 14:15:50.643000 audit[4934]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed1f2d3f0 a2=94 a3=7ffed1f2d5d0 items=0 ppid=4468 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.643000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 14:15:50.643000 audit: BPF prog-id=233 op=UNLOAD Dec 16 14:15:50.643000 audit[4934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffed1f2d3f0 a2=94 a3=7ffed1f2d5d0 items=0 ppid=4468 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.643000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 14:15:50.645000 audit: BPF prog-id=234 op=LOAD Dec 16 14:15:50.645000 audit[4935]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff67fc040 a2=98 a3=3 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.645000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.646000 audit: BPF prog-id=234 op=UNLOAD Dec 16 14:15:50.646000 audit[4935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffff67fc010 a3=0 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.646000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.646000 audit: BPF prog-id=235 op=LOAD Dec 16 14:15:50.646000 audit[4935]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff67fbe30 a2=94 a3=54428f items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.646000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.647000 audit: BPF prog-id=235 op=UNLOAD Dec 16 14:15:50.647000 audit[4935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffff67fbe30 a2=94 a3=54428f items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.647000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.647000 audit: BPF prog-id=236 op=LOAD Dec 16 14:15:50.647000 audit[4935]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff67fbe60 a2=94 a3=2 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.647000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.647000 audit: BPF prog-id=236 op=UNLOAD Dec 16 14:15:50.647000 audit[4935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffff67fbe60 a2=0 a3=2 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.647000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.705434 containerd[1683]: time="2025-12-16T14:15:50.705357873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:15:50.712592 containerd[1683]: time="2025-12-16T14:15:50.712450642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 14:15:50.712592 containerd[1683]: time="2025-12-16T14:15:50.712516318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 14:15:50.713648 kubelet[2975]: E1216 14:15:50.713320 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:15:50.713648 kubelet[2975]: E1216 14:15:50.713414 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:15:50.713861 containerd[1683]: time="2025-12-16T14:15:50.713817763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 14:15:50.715411 kubelet[2975]: E1216 14:15:50.715332 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ec4a891a20f447ee9adfa1d6478964a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbf5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b4555d8df-7md4r_calico-system(32682285-0749-4677-aeaa-b30aca23774e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 14:15:50.887000 audit: BPF prog-id=237 op=LOAD Dec 16 14:15:50.887000 audit[4935]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff67fbd20 a2=94 a3=1 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 
14:15:50.887000 audit: BPF prog-id=237 op=UNLOAD Dec 16 14:15:50.887000 audit[4935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffff67fbd20 a2=94 a3=1 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.901000 audit: BPF prog-id=238 op=LOAD Dec 16 14:15:50.901000 audit[4935]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff67fbd10 a2=94 a3=4 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.901000 audit: BPF prog-id=238 op=UNLOAD Dec 16 14:15:50.901000 audit[4935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffff67fbd10 a2=0 a3=4 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.902000 audit: BPF prog-id=239 op=LOAD Dec 16 14:15:50.902000 audit[4935]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff67fbb70 a2=94 a3=5 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.902000 audit: BPF prog-id=239 op=UNLOAD Dec 16 14:15:50.902000 audit[4935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffff67fbb70 a2=0 a3=5 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.902000 audit: BPF prog-id=240 op=LOAD Dec 16 14:15:50.902000 audit[4935]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff67fbd90 a2=94 a3=6 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.902000 audit: BPF prog-id=240 op=UNLOAD Dec 16 14:15:50.902000 audit[4935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffff67fbd90 a2=0 a3=6 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.902000 audit: BPF prog-id=241 op=LOAD Dec 16 14:15:50.902000 audit[4935]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff67fb540 a2=94 a3=88 items=0 ppid=4468 
pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.903000 audit: BPF prog-id=242 op=LOAD Dec 16 14:15:50.903000 audit[4935]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffff67fb3c0 a2=94 a3=2 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.903000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.903000 audit: BPF prog-id=242 op=UNLOAD Dec 16 14:15:50.903000 audit[4935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffff67fb3f0 a2=0 a3=7ffff67fb4f0 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.903000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.903000 audit: BPF prog-id=241 op=UNLOAD Dec 16 14:15:50.903000 audit[4935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=32e28d10 a2=0 a3=bd9f7e387aad8b54 items=0 ppid=4468 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.903000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:15:50.935000 audit: BPF prog-id=243 op=LOAD Dec 16 14:15:50.935000 audit[4940]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8b982820 a2=98 a3=1999999999999999 items=0 ppid=4468 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.935000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 14:15:50.936000 audit: BPF prog-id=243 op=UNLOAD Dec 16 14:15:50.936000 audit[4940]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd8b9827f0 a3=0 items=0 ppid=4468 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.936000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 14:15:50.936000 audit: BPF prog-id=244 op=LOAD Dec 16 14:15:50.936000 audit[4940]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8b982700 a2=94 a3=ffff items=0 ppid=4468 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
14:15:50.936000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 14:15:50.936000 audit: BPF prog-id=244 op=UNLOAD Dec 16 14:15:50.936000 audit[4940]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd8b982700 a2=94 a3=ffff items=0 ppid=4468 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.936000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 14:15:50.936000 audit: BPF prog-id=245 op=LOAD Dec 16 14:15:50.936000 audit[4940]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8b982740 a2=94 a3=7ffd8b982920 items=0 ppid=4468 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.936000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 14:15:50.936000 audit: BPF prog-id=245 op=UNLOAD Dec 16 14:15:50.936000 audit[4940]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd8b982740 a2=94 a3=7ffd8b982920 items=0 ppid=4468 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:50.936000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 14:15:51.037481 systemd-networkd[1572]: vxlan.calico: Link UP Dec 16 14:15:51.037492 systemd-networkd[1572]: vxlan.calico: Gained carrier Dec 16 14:15:51.066661 containerd[1683]: time="2025-12-16T14:15:51.066374936Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:15:51.069566 containerd[1683]: time="2025-12-16T14:15:51.069430396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 14:15:51.072638 containerd[1683]: time="2025-12-16T14:15:51.071246618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 14:15:51.077796 kubelet[2975]: E1216 14:15:51.077455 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:15:51.079588 kubelet[2975]: E1216 14:15:51.078899 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:15:51.079963 kubelet[2975]: E1216 14:15:51.079309 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbb22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66659c8785-q7tzf_calico-apiserver(8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 14:15:51.081070 containerd[1683]: time="2025-12-16T14:15:51.081014267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 14:15:51.093542 kubelet[2975]: E1216 14:15:51.092644 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" podUID="8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0" Dec 16 
14:15:51.104000 audit: BPF prog-id=246 op=LOAD Dec 16 14:15:51.104000 audit[4964]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb00774e0 a2=98 a3=20 items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:15:51.104000 audit: BPF prog-id=246 op=UNLOAD Dec 16 14:15:51.104000 audit[4964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdb00774b0 a3=0 items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:15:51.105000 audit: BPF prog-id=247 op=LOAD Dec 16 14:15:51.105000 audit[4964]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb00772f0 a2=94 a3=54428f items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.105000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:15:51.105000 audit: BPF prog-id=247 op=UNLOAD Dec 16 14:15:51.105000 audit[4964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdb00772f0 a2=94 a3=54428f items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.105000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:15:51.106000 audit: BPF prog-id=248 op=LOAD Dec 16 14:15:51.106000 audit[4964]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb0077320 a2=94 a3=2 items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.106000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:15:51.106000 audit: BPF prog-id=248 op=UNLOAD Dec 16 14:15:51.106000 audit[4964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdb0077320 a2=0 a3=2 items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.106000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:15:51.106000 audit: BPF prog-id=249 op=LOAD Dec 16 14:15:51.106000 audit[4964]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb00770d0 a2=94 a3=4 items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.106000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:15:51.106000 audit: BPF prog-id=249 op=UNLOAD Dec 16 14:15:51.106000 audit[4964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdb00770d0 a2=94 a3=4 items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.106000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:15:51.106000 audit: BPF prog-id=250 op=LOAD Dec 16 14:15:51.106000 audit[4964]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb00771d0 a2=94 a3=7ffdb0077350 items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.106000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:15:51.107000 audit: BPF prog-id=250 op=UNLOAD Dec 16 14:15:51.107000 audit[4964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdb00771d0 a2=0 a3=7ffdb0077350 items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.107000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:15:51.110000 audit: BPF prog-id=251 op=LOAD Dec 16 14:15:51.110000 audit[4964]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb0076900 a2=94 a3=2 items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 
14:15:51.110000 audit: BPF prog-id=251 op=UNLOAD Dec 16 14:15:51.110000 audit[4964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdb0076900 a2=0 a3=2 items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:15:51.110000 audit: BPF prog-id=252 op=LOAD Dec 16 14:15:51.110000 audit[4964]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb0076a00 a2=94 a3=30 items=0 ppid=4468 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:15:51.118000 audit: BPF prog-id=253 op=LOAD Dec 16 14:15:51.118000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff97e5a700 a2=98 a3=0 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.118000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.118000 audit: BPF prog-id=253 op=UNLOAD Dec 16 14:15:51.118000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff97e5a6d0 a3=0 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.118000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.118000 audit: BPF prog-id=254 op=LOAD Dec 16 14:15:51.118000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff97e5a4f0 a2=94 a3=54428f items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.118000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.118000 audit: BPF prog-id=254 op=UNLOAD Dec 16 14:15:51.118000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff97e5a4f0 a2=94 a3=54428f items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.118000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.118000 audit: BPF prog-id=255 op=LOAD Dec 16 14:15:51.118000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff97e5a520 a2=94 a3=2 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.118000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.118000 audit: BPF prog-id=255 op=UNLOAD Dec 16 14:15:51.118000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff97e5a520 a2=0 a3=2 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.118000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.321997 kubelet[2975]: E1216 14:15:51.319793 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" podUID="5790375d-68f1-4555-984f-974084235d42" Dec 16 14:15:51.322622 kubelet[2975]: E1216 14:15:51.322565 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" podUID="8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0" Dec 16 14:15:51.324514 kubelet[2975]: E1216 14:15:51.324477 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" podUID="6e75d316-f851-4233-b053-cd9dc148b92b" Dec 16 14:15:51.415795 containerd[1683]: time="2025-12-16T14:15:51.415447301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:15:51.419289 containerd[1683]: time="2025-12-16T14:15:51.419002985Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 14:15:51.419289 containerd[1683]: time="2025-12-16T14:15:51.419162436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 14:15:51.419821 kubelet[2975]: E1216 14:15:51.419759 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:15:51.419917 kubelet[2975]: E1216 14:15:51.419837 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:15:51.420113 kubelet[2975]: E1216 14:15:51.420037 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbf5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b4555d8df-7md4r_calico-system(32682285-0749-4677-aeaa-b30aca23774e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 14:15:51.421710 kubelet[2975]: E1216 14:15:51.421270 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b4555d8df-7md4r" podUID="32682285-0749-4677-aeaa-b30aca23774e" Dec 16 14:15:51.503000 audit: BPF prog-id=256 op=LOAD Dec 16 14:15:51.503000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff97e5a3e0 a2=94 a3=1 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.503000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.503000 audit: BPF prog-id=256 op=UNLOAD Dec 16 14:15:51.503000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff97e5a3e0 a2=94 a3=1 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.503000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.518000 audit: BPF prog-id=257 op=LOAD Dec 16 14:15:51.518000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff97e5a3d0 a2=94 a3=4 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.518000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.519000 audit: BPF prog-id=257 op=UNLOAD Dec 16 14:15:51.519000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff97e5a3d0 a2=0 a3=4 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.519000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.519000 audit: BPF prog-id=258 op=LOAD Dec 16 14:15:51.519000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff97e5a230 a2=94 a3=5 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.519000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.519000 audit: BPF prog-id=258 op=UNLOAD Dec 16 14:15:51.519000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff97e5a230 a2=0 a3=5 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.519000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.519000 audit: BPF prog-id=259 op=LOAD Dec 16 14:15:51.519000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff97e5a450 a2=94 a3=6 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.519000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.520000 audit: BPF prog-id=259 op=UNLOAD Dec 16 14:15:51.520000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff97e5a450 a2=0 a3=6 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.520000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.520000 audit: BPF prog-id=260 op=LOAD Dec 16 14:15:51.520000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff97e59c00 a2=94 a3=88 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.520000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.520000 audit: BPF prog-id=261 op=LOAD Dec 16 14:15:51.520000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff97e59a80 a2=94 a3=2 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.520000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.520000 audit: BPF prog-id=261 op=UNLOAD Dec 16 14:15:51.520000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff97e59ab0 a2=0 a3=7fff97e59bb0 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.520000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.521000 audit: BPF prog-id=260 op=UNLOAD Dec 16 14:15:51.521000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2264cd10 a2=0 a3=2d626bb4a29fe524 items=0 ppid=4468 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.521000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:15:51.531000 audit: BPF prog-id=252 op=UNLOAD Dec 16 14:15:51.531000 audit[4468]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000ca1e80 a2=0 a3=0 items=0 ppid=4350 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.531000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 14:15:51.643000 audit[5004]: NETFILTER_CFG table=filter:125 family=2 entries=14 op=nft_register_rule pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:51.643000 audit[5004]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdc4ec10f0 a2=0 a3=7ffdc4ec10dc items=0 ppid=3086 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.643000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:51.649000 audit[5004]: NETFILTER_CFG table=nat:126 family=2 entries=20 op=nft_register_rule pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:51.649000 audit[5004]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdc4ec10f0 a2=0 a3=7ffdc4ec10dc items=0 ppid=3086 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.649000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:51.666000 audit[5009]: NETFILTER_CFG table=mangle:127 family=2 entries=16 op=nft_register_chain pid=5009 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 14:15:51.666000 audit[5009]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd23cfc2d0 a2=0 a3=7ffd23cfc2bc items=0 ppid=4468 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.666000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 
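The PROCTITLE fields in the audit records above and below are hex-encoded, NUL-separated argv strings; for example, 627066746F6F6C006D6170006C697374002D2D6A736F6E encodes "bpftool map list --json". The following is a minimal decoding sketch, assuming only Python 3 on the host; the helper name decode_proctitle is illustrative and is not part of auditd, bpftool, or Calico.

# A minimal sketch (not part of auditd or Calico): decode an audit PROCTITLE
# hex payload back into the NUL-separated command line it encodes.
def decode_proctitle(hex_payload):
    raw = bytes.fromhex(hex_payload)                      # hex string -> raw bytes
    return [a.decode("utf-8", "replace") for a in raw.split(b"\x00") if a]

if __name__ == "__main__":
    # Sample value copied verbatim from the audit entries above.
    sample = "627066746F6F6C006D6170006C697374002D2D6A736F6E"
    print(" ".join(decode_proctitle(sample)))             # bpftool map list --json

Run against the sample it prints "bpftool map list --json", which is consistent with the comm="bpftool" and exe="/usr/bin/bpftool" fields recorded alongside each PROCTITLE line; the same decoding applies to the iptables-restore and calico-node proctitle payloads in this section.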
Dec 16 14:15:51.674000 audit[5012]: NETFILTER_CFG table=nat:128 family=2 entries=15 op=nft_register_chain pid=5012 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 14:15:51.674000 audit[5012]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff488d2250 a2=0 a3=7fff488d223c items=0 ppid=4468 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.674000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 14:15:51.685000 audit[5011]: NETFILTER_CFG table=raw:129 family=2 entries=21 op=nft_register_chain pid=5011 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 14:15:51.685000 audit[5011]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff2fff7e80 a2=0 a3=7fff2fff7e6c items=0 ppid=4468 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.685000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 14:15:51.698000 audit[5014]: NETFILTER_CFG table=filter:130 family=2 entries=321 op=nft_register_chain pid=5014 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 14:15:51.698000 audit[5014]: SYSCALL arch=c000003e syscall=46 success=yes exit=190616 a0=3 a1=7fffca890e60 a2=0 a3=7fffca890e4c items=0 ppid=4468 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:51.698000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 14:15:52.324208 kubelet[2975]: E1216 14:15:52.324129 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b4555d8df-7md4r" podUID="32682285-0749-4677-aeaa-b30aca23774e" Dec 16 14:15:52.365395 systemd-networkd[1572]: vxlan.calico: Gained IPv6LL Dec 16 14:15:52.678000 audit[5027]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=5027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:52.678000 audit[5027]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc43d02a20 a2=0 a3=7ffc43d02a0c items=0 ppid=3086 pid=5027 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:52.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:15:52.683000 audit[5027]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=5027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:15:52.683000 audit[5027]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc43d02a20 a2=0 a3=7ffc43d02a0c items=0 ppid=3086 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:15:52.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:16:02.556482 containerd[1683]: time="2025-12-16T14:16:02.556258932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 14:16:02.876846 containerd[1683]: time="2025-12-16T14:16:02.876569618Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:02.878047 containerd[1683]: time="2025-12-16T14:16:02.877954751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 14:16:02.878047 containerd[1683]: time="2025-12-16T14:16:02.878002499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:02.878549 kubelet[2975]: E1216 14:16:02.878325 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 14:16:02.878549 kubelet[2975]: E1216 14:16:02.878413 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 14:16:02.879743 kubelet[2975]: E1216 14:16:02.879441 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvzdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tfzg7_calico-system(1248d2d9-77a6-4a9d-9b93-4af871a2edbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:02.882339 containerd[1683]: time="2025-12-16T14:16:02.882257852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 14:16:03.187957 containerd[1683]: time="2025-12-16T14:16:03.187535971Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:03.192210 containerd[1683]: time="2025-12-16T14:16:03.190397529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 14:16:03.192210 containerd[1683]: time="2025-12-16T14:16:03.190610125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:03.192363 kubelet[2975]: E1216 14:16:03.190852 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 14:16:03.192363 kubelet[2975]: E1216 14:16:03.190961 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 14:16:03.192363 kubelet[2975]: E1216 14:16:03.191336 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvzdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tfzg7_calico-system(1248d2d9-77a6-4a9d-9b93-4af871a2edbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:03.193137 kubelet[2975]: E1216 14:16:03.192985 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:16:03.559952 containerd[1683]: time="2025-12-16T14:16:03.559892317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 14:16:03.866139 containerd[1683]: time="2025-12-16T14:16:03.865903445Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
14:16:03.867885 containerd[1683]: time="2025-12-16T14:16:03.867810365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 14:16:03.867967 containerd[1683]: time="2025-12-16T14:16:03.867940774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:03.868355 kubelet[2975]: E1216 14:16:03.868272 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:16:03.868625 kubelet[2975]: E1216 14:16:03.868527 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:16:03.869275 kubelet[2975]: E1216 14:16:03.869014 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbb22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66659c8785-q7tzf_calico-apiserver(8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:03.869793 containerd[1683]: time="2025-12-16T14:16:03.869468588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 14:16:03.870764 kubelet[2975]: E1216 14:16:03.870620 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" podUID="8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0" Dec 16 14:16:04.187606 containerd[1683]: time="2025-12-16T14:16:04.187445977Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:04.190971 containerd[1683]: time="2025-12-16T14:16:04.190459561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 14:16:04.190971 containerd[1683]: time="2025-12-16T14:16:04.190566721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:04.191116 kubelet[2975]: E1216 14:16:04.190859 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 14:16:04.191116 kubelet[2975]: E1216 14:16:04.190967 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 14:16:04.193339 kubelet[2975]: E1216 14:16:04.191284 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb8l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2m9mr_calico-system(86b288c9-63f1-4f44-8c9e-eb5a65f83789): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:04.193339 kubelet[2975]: E1216 14:16:04.192529 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2m9mr" podUID="86b288c9-63f1-4f44-8c9e-eb5a65f83789" Dec 16 14:16:04.555924 containerd[1683]: time="2025-12-16T14:16:04.555860474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 
16 14:16:04.892967 containerd[1683]: time="2025-12-16T14:16:04.892735826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:04.894528 containerd[1683]: time="2025-12-16T14:16:04.894388273Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 14:16:04.894528 containerd[1683]: time="2025-12-16T14:16:04.894470555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:04.894816 kubelet[2975]: E1216 14:16:04.894732 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:16:04.895043 kubelet[2975]: E1216 14:16:04.894827 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:16:04.895130 kubelet[2975]: E1216 14:16:04.895059 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkmx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-66659c8785-m7865_calico-apiserver(6e75d316-f851-4233-b053-cd9dc148b92b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:04.896637 kubelet[2975]: E1216 14:16:04.896575 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" podUID="6e75d316-f851-4233-b053-cd9dc148b92b" Dec 16 14:16:06.555465 containerd[1683]: time="2025-12-16T14:16:06.555110962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 14:16:06.880501 containerd[1683]: time="2025-12-16T14:16:06.880296796Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:06.881472 containerd[1683]: time="2025-12-16T14:16:06.881430010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 14:16:06.881545 containerd[1683]: time="2025-12-16T14:16:06.881527439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:06.881904 kubelet[2975]: E1216 14:16:06.881850 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 14:16:06.884282 kubelet[2975]: E1216 14:16:06.882281 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 14:16:06.884282 kubelet[2975]: E1216 14:16:06.882535 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwp5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8c8c88d5b-gxh2p_calico-system(5790375d-68f1-4555-984f-974084235d42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:06.884701 kubelet[2975]: E1216 14:16:06.884650 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" podUID="5790375d-68f1-4555-984f-974084235d42" Dec 16 14:16:07.557302 containerd[1683]: time="2025-12-16T14:16:07.557203428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 14:16:07.889728 containerd[1683]: time="2025-12-16T14:16:07.889259624Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:07.890591 containerd[1683]: time="2025-12-16T14:16:07.890514394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 14:16:07.890591 containerd[1683]: time="2025-12-16T14:16:07.890555802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:07.890951 kubelet[2975]: E1216 14:16:07.890894 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:16:07.891829 kubelet[2975]: E1216 14:16:07.890980 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:16:07.891829 kubelet[2975]: E1216 14:16:07.891292 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ec4a891a20f447ee9adfa1d6478964a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbf5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b4555d8df-7md4r_calico-system(32682285-0749-4677-aeaa-b30aca23774e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:07.894338 containerd[1683]: time="2025-12-16T14:16:07.894288736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 14:16:08.212859 containerd[1683]: time="2025-12-16T14:16:08.212747851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:08.214037 
containerd[1683]: time="2025-12-16T14:16:08.213991064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 14:16:08.214321 containerd[1683]: time="2025-12-16T14:16:08.214020697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:08.214489 kubelet[2975]: E1216 14:16:08.214432 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:16:08.214572 kubelet[2975]: E1216 14:16:08.214509 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:16:08.214750 kubelet[2975]: E1216 14:16:08.214694 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbf5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b4555d8df-7md4r_calico-system(32682285-0749-4677-aeaa-b30aca23774e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Dec 16 14:16:08.216371 kubelet[2975]: E1216 14:16:08.216266 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b4555d8df-7md4r" podUID="32682285-0749-4677-aeaa-b30aca23774e" Dec 16 14:16:14.555493 kubelet[2975]: E1216 14:16:14.555319 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2m9mr" podUID="86b288c9-63f1-4f44-8c9e-eb5a65f83789" Dec 16 14:16:16.555502 kubelet[2975]: E1216 14:16:16.555427 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" podUID="6e75d316-f851-4233-b053-cd9dc148b92b" Dec 16 14:16:18.557290 kubelet[2975]: E1216 14:16:18.557216 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" podUID="8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0" Dec 16 14:16:18.559463 kubelet[2975]: E1216 14:16:18.558931 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:16:20.556212 kubelet[2975]: E1216 14:16:20.556055 2975 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" podUID="5790375d-68f1-4555-984f-974084235d42" Dec 16 14:16:22.557833 kubelet[2975]: E1216 14:16:22.557664 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b4555d8df-7md4r" podUID="32682285-0749-4677-aeaa-b30aca23774e" Dec 16 14:16:26.559483 containerd[1683]: time="2025-12-16T14:16:26.559104091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 14:16:26.886129 containerd[1683]: time="2025-12-16T14:16:26.885822978Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:26.889701 containerd[1683]: time="2025-12-16T14:16:26.889599204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 14:16:26.889701 containerd[1683]: time="2025-12-16T14:16:26.889661295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:26.890118 kubelet[2975]: E1216 14:16:26.889932 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 14:16:26.890118 kubelet[2975]: E1216 14:16:26.890011 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 14:16:26.891153 kubelet[2975]: E1216 14:16:26.890283 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb8l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2m9mr_calico-system(86b288c9-63f1-4f44-8c9e-eb5a65f83789): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:26.891734 kubelet[2975]: E1216 14:16:26.891556 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2m9mr" podUID="86b288c9-63f1-4f44-8c9e-eb5a65f83789" Dec 16 14:16:30.111567 systemd[1]: Started sshd@9-10.230.52.194:22-139.178.89.65:33810.service - OpenSSH per-connection server daemon 
(139.178.89.65:33810). Dec 16 14:16:30.127683 kernel: kauditd_printk_skb: 212 callbacks suppressed Dec 16 14:16:30.127830 kernel: audit: type=1130 audit(1765894590.110:744): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.52.194:22-139.178.89.65:33810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:30.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.52.194:22-139.178.89.65:33810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:30.559877 containerd[1683]: time="2025-12-16T14:16:30.558747445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 14:16:30.877965 containerd[1683]: time="2025-12-16T14:16:30.877481725Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:30.878736 containerd[1683]: time="2025-12-16T14:16:30.878621646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 14:16:30.878822 containerd[1683]: time="2025-12-16T14:16:30.878733319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:30.890653 kubelet[2975]: E1216 14:16:30.890531 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:16:30.892392 kubelet[2975]: E1216 14:16:30.890689 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:16:30.892392 kubelet[2975]: E1216 14:16:30.891033 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbb22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66659c8785-q7tzf_calico-apiserver(8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:30.892392 kubelet[2975]: E1216 14:16:30.892338 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" podUID="8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0" Dec 16 14:16:30.966000 audit[5088]: USER_ACCT pid=5088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:30.983852 kernel: audit: type=1101 audit(1765894590.966:745): pid=5088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:30.983973 kernel: audit: type=1103 audit(1765894590.978:746): pid=5088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:30.978000 audit[5088]: CRED_ACQ pid=5088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:30.985866 sshd[5088]: Accepted publickey for core from 139.178.89.65 port 33810 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:16:30.986701 kernel: audit: type=1006 audit(1765894590.978:747): pid=5088 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 14:16:30.983142 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:16:30.978000 audit[5088]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 
a1=7ffd42ce48d0 a2=3 a3=0 items=0 ppid=1 pid=5088 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:16:30.989860 kernel: audit: type=1300 audit(1765894590.978:747): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd42ce48d0 a2=3 a3=0 items=0 ppid=1 pid=5088 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:16:30.978000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:16:30.994559 kernel: audit: type=1327 audit(1765894590.978:747): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:16:31.003312 systemd-logind[1646]: New session 12 of user core. Dec 16 14:16:31.017751 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 14:16:31.024000 audit[5088]: USER_START pid=5088 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:31.032627 kernel: audit: type=1105 audit(1765894591.024:748): pid=5088 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:31.031000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:31.038214 kernel: audit: type=1103 audit(1765894591.031:749): pid=5091 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:31.567767 containerd[1683]: time="2025-12-16T14:16:31.567649287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 14:16:31.878852 containerd[1683]: time="2025-12-16T14:16:31.878439568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:31.882548 containerd[1683]: time="2025-12-16T14:16:31.882253292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 14:16:31.882548 containerd[1683]: time="2025-12-16T14:16:31.882394564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:31.882850 kubelet[2975]: E1216 14:16:31.882589 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:16:31.882850 
kubelet[2975]: E1216 14:16:31.882665 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:16:31.883013 kubelet[2975]: E1216 14:16:31.882908 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkmx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66659c8785-m7865_calico-apiserver(6e75d316-f851-4233-b053-cd9dc148b92b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:31.885476 kubelet[2975]: E1216 14:16:31.884378 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" podUID="6e75d316-f851-4233-b053-cd9dc148b92b" Dec 16 14:16:32.160341 sshd[5091]: Connection closed by 139.178.89.65 port 33810 Dec 16 14:16:32.160052 sshd-session[5088]: pam_unix(sshd:session): session closed for user core Dec 16 14:16:32.183261 kernel: audit: type=1106 
audit(1765894592.170:750): pid=5088 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:32.170000 audit[5088]: USER_END pid=5088 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:32.194985 kernel: audit: type=1104 audit(1765894592.171:751): pid=5088 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:32.171000 audit[5088]: CRED_DISP pid=5088 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:32.190165 systemd[1]: sshd@9-10.230.52.194:22-139.178.89.65:33810.service: Deactivated successfully. Dec 16 14:16:32.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.52.194:22-139.178.89.65:33810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:32.198128 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 14:16:32.200922 systemd-logind[1646]: Session 12 logged out. Waiting for processes to exit. Dec 16 14:16:32.204156 systemd-logind[1646]: Removed session 12. 
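The repeated containerd messages above ("fetch failed after status: 404 Not Found" host=ghcr.io) mean the registry returned no manifest for the requested tag, which is why every retry ends in ErrImagePull and then ImagePullBackOff. Below is a minimal sketch of checking such a tag directly against the registry, outside kubelet's retry loop. It assumes ghcr.io issues anonymous pull tokens from its standard OCI distribution token endpoint for public repositories; the script and its helper names are illustrative and are not part of any tool that appears in this log.

#!/usr/bin/env python3
"""Sketch: check whether an image tag resolves on ghcr.io via the OCI distribution API.

The registry host, repository, and tag below are taken from the failing pulls
in the log; everything else is an illustrative assumption.
"""
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPOSITORY = "flatcar/calico/apiserver"   # repository from the failing PullImage calls
TAG = "v3.30.4"                           # tag reported as "not found"


def get_pull_token(repository: str) -> str:
    # Anonymous pull tokens are issued by the registry's token endpoint
    # (assumed standard token flow for public ghcr.io repositories).
    url = f"https://{REGISTRY}/token?service={REGISTRY}&scope=repository:{repository}:pull"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["token"]


def manifest_exists(repository: str, tag: str) -> bool:
    # HEAD the manifest, roughly what a client does when resolving a tag.
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repository}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {get_pull_token(repository)}",
            # Accept both OCI and Docker v2 manifest media types.
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json, "
                      "application/vnd.oci.image.manifest.v1+json, "
                      "application/vnd.docker.distribution.manifest.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:   # same status containerd logs above
            return False
        raise


if __name__ == "__main__":
    ok = manifest_exists(REPOSITORY, TAG)
    print(f"{REGISTRY}/{REPOSITORY}:{TAG} -> {'found' if ok else 'not found (404)'}")

If the manifest request also comes back 404, the tag simply is not published under that repository, and the back-off cycle recorded above will keep repeating until the image is pushed or the workload is pointed at a tag that exists.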
Dec 16 14:16:32.557223 containerd[1683]: time="2025-12-16T14:16:32.556699037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 14:16:32.896723 containerd[1683]: time="2025-12-16T14:16:32.896493352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:32.899080 containerd[1683]: time="2025-12-16T14:16:32.898648088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 14:16:32.899080 containerd[1683]: time="2025-12-16T14:16:32.898757616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:32.899428 kubelet[2975]: E1216 14:16:32.899254 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 14:16:32.900003 kubelet[2975]: E1216 14:16:32.899456 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 14:16:32.900003 kubelet[2975]: E1216 14:16:32.899940 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwp5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8c8c88d5b-gxh2p_calico-system(5790375d-68f1-4555-984f-974084235d42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:32.901666 containerd[1683]: time="2025-12-16T14:16:32.901617918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 14:16:32.902632 kubelet[2975]: E1216 14:16:32.902457 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" podUID="5790375d-68f1-4555-984f-974084235d42" Dec 16 14:16:33.226222 containerd[1683]: time="2025-12-16T14:16:33.226109929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:33.227324 containerd[1683]: time="2025-12-16T14:16:33.227275106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 14:16:33.227420 containerd[1683]: time="2025-12-16T14:16:33.227373916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:33.227818 kubelet[2975]: E1216 14:16:33.227727 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 14:16:33.227818 kubelet[2975]: E1216 14:16:33.227813 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 14:16:33.230136 kubelet[2975]: E1216 14:16:33.230042 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvzdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tfzg7_calico-system(1248d2d9-77a6-4a9d-9b93-4af871a2edbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:33.238584 containerd[1683]: time="2025-12-16T14:16:33.238441174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 14:16:33.558641 containerd[1683]: time="2025-12-16T14:16:33.557959005Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:33.560852 containerd[1683]: time="2025-12-16T14:16:33.560782048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 14:16:33.561053 containerd[1683]: time="2025-12-16T14:16:33.560901770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:33.561121 kubelet[2975]: E1216 14:16:33.561038 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 14:16:33.561121 kubelet[2975]: E1216 14:16:33.561095 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 14:16:33.561350 kubelet[2975]: E1216 14:16:33.561294 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvzdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tfzg7_calico-system(1248d2d9-77a6-4a9d-9b93-4af871a2edbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:33.563002 kubelet[2975]: E1216 14:16:33.562953 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:16:34.557127 containerd[1683]: time="2025-12-16T14:16:34.556760817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 14:16:34.883334 containerd[1683]: time="2025-12-16T14:16:34.883135628Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
14:16:34.886625 containerd[1683]: time="2025-12-16T14:16:34.886572940Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 14:16:34.886723 containerd[1683]: time="2025-12-16T14:16:34.886682579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:34.887064 kubelet[2975]: E1216 14:16:34.886983 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:16:34.889017 kubelet[2975]: E1216 14:16:34.888234 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:16:34.889017 kubelet[2975]: E1216 14:16:34.888440 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ec4a891a20f447ee9adfa1d6478964a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbf5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b4555d8df-7md4r_calico-system(32682285-0749-4677-aeaa-b30aca23774e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:34.892436 containerd[1683]: time="2025-12-16T14:16:34.892388519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 14:16:35.202248 containerd[1683]: time="2025-12-16T14:16:35.201920856Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:16:35.205134 containerd[1683]: time="2025-12-16T14:16:35.204957290Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 14:16:35.205134 containerd[1683]: time="2025-12-16T14:16:35.205004849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 14:16:35.208203 kubelet[2975]: E1216 14:16:35.206310 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:16:35.208203 kubelet[2975]: E1216 14:16:35.206400 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:16:35.208203 kubelet[2975]: E1216 14:16:35.206622 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbf5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b4555d8df-7md4r_calico-system(32682285-0749-4677-aeaa-b30aca23774e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 14:16:35.208595 kubelet[2975]: E1216 14:16:35.208333 2975 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b4555d8df-7md4r" podUID="32682285-0749-4677-aeaa-b30aca23774e" Dec 16 14:16:37.332688 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 14:16:37.332888 kernel: audit: type=1130 audit(1765894597.321:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.52.194:22-139.178.89.65:38104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:37.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.52.194:22-139.178.89.65:38104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:37.322689 systemd[1]: Started sshd@10-10.230.52.194:22-139.178.89.65:38104.service - OpenSSH per-connection server daemon (139.178.89.65:38104). Dec 16 14:16:37.556630 kubelet[2975]: E1216 14:16:37.556568 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2m9mr" podUID="86b288c9-63f1-4f44-8c9e-eb5a65f83789" Dec 16 14:16:38.147000 audit[5116]: USER_ACCT pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:38.155162 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:16:38.157433 sshd[5116]: Accepted publickey for core from 139.178.89.65 port 38104 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:16:38.159577 kernel: audit: type=1101 audit(1765894598.147:754): pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:38.149000 audit[5116]: CRED_ACQ pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:38.165203 kernel: audit: type=1103 audit(1765894598.149:755): pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:38.149000 audit[5116]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbd6cb050 a2=3 a3=0 items=0 ppid=1 pid=5116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:16:38.170310 kernel: audit: type=1006 audit(1765894598.149:756): pid=5116 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 14:16:38.170382 kernel: audit: type=1300 audit(1765894598.149:756): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbd6cb050 a2=3 a3=0 items=0 ppid=1 pid=5116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:16:38.149000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:16:38.177213 kernel: audit: type=1327 audit(1765894598.149:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:16:38.182017 systemd-logind[1646]: New session 13 of user core. Dec 16 14:16:38.188455 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 14:16:38.195000 audit[5116]: USER_START pid=5116 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:38.203220 kernel: audit: type=1105 audit(1765894598.195:757): pid=5116 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:38.203000 audit[5120]: CRED_ACQ pid=5120 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:38.211219 kernel: audit: type=1103 audit(1765894598.203:758): pid=5120 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:38.780316 sshd[5120]: Connection closed by 139.178.89.65 port 38104 Dec 16 14:16:38.781304 sshd-session[5116]: pam_unix(sshd:session): session closed for user core Dec 16 14:16:38.784000 audit[5116]: USER_END pid=5116 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:38.796229 kernel: audit: type=1106 audit(1765894598.784:759): pid=5116 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:38.796271 systemd[1]: sshd@10-10.230.52.194:22-139.178.89.65:38104.service: Deactivated successfully. Dec 16 14:16:38.784000 audit[5116]: CRED_DISP pid=5116 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:38.805224 kernel: audit: type=1104 audit(1765894598.784:760): pid=5116 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:38.806337 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 14:16:38.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.52.194:22-139.178.89.65:38104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:38.810016 systemd-logind[1646]: Session 13 logged out. Waiting for processes to exit. Dec 16 14:16:38.814321 systemd-logind[1646]: Removed session 13. Dec 16 14:16:42.556769 kubelet[2975]: E1216 14:16:42.556695 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" podUID="6e75d316-f851-4233-b053-cd9dc148b92b" Dec 16 14:16:43.939945 systemd[1]: Started sshd@11-10.230.52.194:22-139.178.89.65:38126.service - OpenSSH per-connection server daemon (139.178.89.65:38126). Dec 16 14:16:43.944526 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 14:16:43.944596 kernel: audit: type=1130 audit(1765894603.939:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.52.194:22-139.178.89.65:38126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:43.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.52.194:22-139.178.89.65:38126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:16:44.556211 kubelet[2975]: E1216 14:16:44.556120 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" podUID="8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0" Dec 16 14:16:44.777000 audit[5135]: USER_ACCT pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:44.781362 sshd[5135]: Accepted publickey for core from 139.178.89.65 port 38126 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:16:44.783576 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:16:44.785229 kernel: audit: type=1101 audit(1765894604.777:763): pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:44.782000 audit[5135]: CRED_ACQ pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:44.795252 kernel: audit: type=1103 audit(1765894604.782:764): pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:44.799542 kernel: audit: type=1006 audit(1765894604.782:765): pid=5135 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 14:16:44.799994 systemd-logind[1646]: New session 14 of user core. Dec 16 14:16:44.782000 audit[5135]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed65d7040 a2=3 a3=0 items=0 ppid=1 pid=5135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:16:44.809533 kernel: audit: type=1300 audit(1765894604.782:765): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed65d7040 a2=3 a3=0 items=0 ppid=1 pid=5135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:16:44.813579 kernel: audit: type=1327 audit(1765894604.782:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:16:44.782000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:16:44.812699 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 14:16:44.820000 audit[5135]: USER_START pid=5135 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:44.827211 kernel: audit: type=1105 audit(1765894604.820:766): pid=5135 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:44.830000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:44.836218 kernel: audit: type=1103 audit(1765894604.830:767): pid=5138 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:45.446769 sshd[5138]: Connection closed by 139.178.89.65 port 38126 Dec 16 14:16:45.453695 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Dec 16 14:16:45.456000 audit[5135]: USER_END pid=5135 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:45.456000 audit[5135]: CRED_DISP pid=5135 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:45.461747 systemd[1]: sshd@11-10.230.52.194:22-139.178.89.65:38126.service: Deactivated successfully. Dec 16 14:16:45.465233 kernel: audit: type=1106 audit(1765894605.456:768): pid=5135 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:45.465375 kernel: audit: type=1104 audit(1765894605.456:769): pid=5135 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:45.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.52.194:22-139.178.89.65:38126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:45.469731 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 14:16:45.473639 systemd-logind[1646]: Session 14 logged out. Waiting for processes to exit. Dec 16 14:16:45.475569 systemd-logind[1646]: Removed session 14. 
Dec 16 14:16:46.556544 kubelet[2975]: E1216 14:16:46.556197 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" podUID="5790375d-68f1-4555-984f-974084235d42" Dec 16 14:16:46.558653 kubelet[2975]: E1216 14:16:46.557501 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:16:50.557560 kubelet[2975]: E1216 14:16:50.557258 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b4555d8df-7md4r" podUID="32682285-0749-4677-aeaa-b30aca23774e" Dec 16 14:16:50.617691 systemd[1]: Started sshd@12-10.230.52.194:22-139.178.89.65:40398.service - OpenSSH per-connection server daemon (139.178.89.65:40398). Dec 16 14:16:50.627382 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 14:16:50.633011 kernel: audit: type=1130 audit(1765894610.617:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.52.194:22-139.178.89.65:40398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:50.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.52.194:22-139.178.89.65:40398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:16:51.481000 audit[5180]: USER_ACCT pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:51.493414 kernel: audit: type=1101 audit(1765894611.481:772): pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:51.493573 sshd[5180]: Accepted publickey for core from 139.178.89.65 port 40398 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:16:51.494000 audit[5180]: CRED_ACQ pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:51.497013 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:16:51.502548 kernel: audit: type=1103 audit(1765894611.494:773): pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:51.502699 kernel: audit: type=1006 audit(1765894611.494:774): pid=5180 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 14:16:51.494000 audit[5180]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd692fc0e0 a2=3 a3=0 items=0 ppid=1 pid=5180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:16:51.508572 kernel: audit: type=1300 audit(1765894611.494:774): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd692fc0e0 a2=3 a3=0 items=0 ppid=1 pid=5180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:16:51.494000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:16:51.519199 kernel: audit: type=1327 audit(1765894611.494:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:16:51.521507 systemd-logind[1646]: New session 15 of user core. Dec 16 14:16:51.530564 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 14:16:51.536000 audit[5180]: USER_START pid=5180 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:51.544209 kernel: audit: type=1105 audit(1765894611.536:775): pid=5180 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:51.547000 audit[5183]: CRED_ACQ pid=5183 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:51.554215 kernel: audit: type=1103 audit(1765894611.547:776): pid=5183 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:52.126616 sshd[5183]: Connection closed by 139.178.89.65 port 40398 Dec 16 14:16:52.128418 sshd-session[5180]: pam_unix(sshd:session): session closed for user core Dec 16 14:16:52.132000 audit[5180]: USER_END pid=5180 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:52.143217 kernel: audit: type=1106 audit(1765894612.132:777): pid=5180 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:52.139000 audit[5180]: CRED_DISP pid=5180 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:52.149255 kernel: audit: type=1104 audit(1765894612.139:778): pid=5180 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:52.149656 systemd[1]: sshd@12-10.230.52.194:22-139.178.89.65:40398.service: Deactivated successfully. Dec 16 14:16:52.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.52.194:22-139.178.89.65:40398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:52.156229 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 14:16:52.160069 systemd-logind[1646]: Session 15 logged out. Waiting for processes to exit. Dec 16 14:16:52.163929 systemd-logind[1646]: Removed session 15. 
Dec 16 14:16:52.293269 systemd[1]: Started sshd@13-10.230.52.194:22-139.178.89.65:40410.service - OpenSSH per-connection server daemon (139.178.89.65:40410). Dec 16 14:16:52.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.52.194:22-139.178.89.65:40410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:52.557691 kubelet[2975]: E1216 14:16:52.557368 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2m9mr" podUID="86b288c9-63f1-4f44-8c9e-eb5a65f83789" Dec 16 14:16:53.119000 audit[5197]: USER_ACCT pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:53.124140 sshd[5197]: Accepted publickey for core from 139.178.89.65 port 40410 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:16:53.123000 audit[5197]: CRED_ACQ pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:53.123000 audit[5197]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8e947fb0 a2=3 a3=0 items=0 ppid=1 pid=5197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:16:53.123000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:16:53.125546 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:16:53.138017 systemd-logind[1646]: New session 16 of user core. Dec 16 14:16:53.147559 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 14:16:53.153000 audit[5197]: USER_START pid=5197 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:53.158000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:53.790390 sshd[5200]: Connection closed by 139.178.89.65 port 40410 Dec 16 14:16:53.791785 sshd-session[5197]: pam_unix(sshd:session): session closed for user core Dec 16 14:16:53.797000 audit[5197]: USER_END pid=5197 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:53.797000 audit[5197]: CRED_DISP pid=5197 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:53.805883 systemd[1]: sshd@13-10.230.52.194:22-139.178.89.65:40410.service: Deactivated successfully. Dec 16 14:16:53.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.52.194:22-139.178.89.65:40410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:53.810881 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 14:16:53.814057 systemd-logind[1646]: Session 16 logged out. Waiting for processes to exit. Dec 16 14:16:53.816805 systemd-logind[1646]: Removed session 16. Dec 16 14:16:53.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.52.194:22-139.178.89.65:40422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:53.952610 systemd[1]: Started sshd@14-10.230.52.194:22-139.178.89.65:40422.service - OpenSSH per-connection server daemon (139.178.89.65:40422). 
Dec 16 14:16:54.799000 audit[5210]: USER_ACCT pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:54.801276 sshd[5210]: Accepted publickey for core from 139.178.89.65 port 40422 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:16:54.802000 audit[5210]: CRED_ACQ pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:54.802000 audit[5210]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9431a970 a2=3 a3=0 items=0 ppid=1 pid=5210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:16:54.802000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:16:54.805575 sshd-session[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:16:54.819250 systemd-logind[1646]: New session 17 of user core. Dec 16 14:16:54.825478 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 14:16:54.830000 audit[5210]: USER_START pid=5210 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:54.834000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:55.389153 sshd[5213]: Connection closed by 139.178.89.65 port 40422 Dec 16 14:16:55.389756 sshd-session[5210]: pam_unix(sshd:session): session closed for user core Dec 16 14:16:55.392000 audit[5210]: USER_END pid=5210 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:55.393000 audit[5210]: CRED_DISP pid=5210 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:16:55.400706 systemd[1]: sshd@14-10.230.52.194:22-139.178.89.65:40422.service: Deactivated successfully. Dec 16 14:16:55.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.52.194:22-139.178.89.65:40422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:16:55.405464 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 14:16:55.407439 systemd-logind[1646]: Session 17 logged out. Waiting for processes to exit. 
Dec 16 14:16:55.412828 systemd-logind[1646]: Removed session 17. Dec 16 14:16:55.558745 kubelet[2975]: E1216 14:16:55.558680 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" podUID="8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0" Dec 16 14:16:55.564591 kubelet[2975]: E1216 14:16:55.564522 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" podUID="6e75d316-f851-4233-b053-cd9dc148b92b" Dec 16 14:17:00.559503 kubelet[2975]: E1216 14:17:00.558233 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:17:00.572681 systemd[1]: Started sshd@15-10.230.52.194:22-139.178.89.65:35860.service - OpenSSH per-connection server daemon (139.178.89.65:35860). Dec 16 14:17:00.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.52.194:22-139.178.89.65:35860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:00.582174 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 14:17:00.582307 kernel: audit: type=1130 audit(1765894620.573:798): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.52.194:22-139.178.89.65:35860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:17:01.414000 audit[5228]: USER_ACCT pid=5228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:01.424428 kernel: audit: type=1101 audit(1765894621.414:799): pid=5228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:01.424620 sshd[5228]: Accepted publickey for core from 139.178.89.65 port 35860 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:17:01.426000 audit[5228]: CRED_ACQ pid=5228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:01.433851 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:17:01.435546 kernel: audit: type=1103 audit(1765894621.426:800): pid=5228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:01.435696 kernel: audit: type=1006 audit(1765894621.431:801): pid=5228 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 14:17:01.431000 audit[5228]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff97c90620 a2=3 a3=0 items=0 ppid=1 pid=5228 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:01.447234 kernel: audit: type=1300 audit(1765894621.431:801): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff97c90620 a2=3 a3=0 items=0 ppid=1 pid=5228 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:01.431000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:01.452228 kernel: audit: type=1327 audit(1765894621.431:801): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:01.453098 systemd-logind[1646]: New session 18 of user core. Dec 16 14:17:01.461525 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 14:17:01.467000 audit[5228]: USER_START pid=5228 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:01.475309 kernel: audit: type=1105 audit(1765894621.467:802): pid=5228 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:01.473000 audit[5237]: CRED_ACQ pid=5237 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:01.482220 kernel: audit: type=1103 audit(1765894621.473:803): pid=5237 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:01.557425 kubelet[2975]: E1216 14:17:01.556905 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" podUID="5790375d-68f1-4555-984f-974084235d42" Dec 16 14:17:02.036753 sshd[5237]: Connection closed by 139.178.89.65 port 35860 Dec 16 14:17:02.045584 sshd-session[5228]: pam_unix(sshd:session): session closed for user core Dec 16 14:17:02.047000 audit[5228]: USER_END pid=5228 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:02.056912 systemd[1]: sshd@15-10.230.52.194:22-139.178.89.65:35860.service: Deactivated successfully. Dec 16 14:17:02.057923 kernel: audit: type=1106 audit(1765894622.047:804): pid=5228 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:02.062601 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 14:17:02.067376 systemd-logind[1646]: Session 18 logged out. Waiting for processes to exit. Dec 16 14:17:02.051000 audit[5228]: CRED_DISP pid=5228 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:02.071610 systemd-logind[1646]: Removed session 18. 
Dec 16 14:17:02.076231 kernel: audit: type=1104 audit(1765894622.051:805): pid=5228 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:02.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.52.194:22-139.178.89.65:35860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:05.559715 kubelet[2975]: E1216 14:17:05.559514 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2m9mr" podUID="86b288c9-63f1-4f44-8c9e-eb5a65f83789" Dec 16 14:17:05.565664 kubelet[2975]: E1216 14:17:05.561831 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b4555d8df-7md4r" podUID="32682285-0749-4677-aeaa-b30aca23774e" Dec 16 14:17:06.557566 kubelet[2975]: E1216 14:17:06.557337 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" podUID="6e75d316-f851-4233-b053-cd9dc148b92b" Dec 16 14:17:07.205207 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 14:17:07.205415 kernel: audit: type=1130 audit(1765894627.195:807): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.52.194:22-139.178.89.65:35866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:07.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.52.194:22-139.178.89.65:35866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:07.196614 systemd[1]: Started sshd@16-10.230.52.194:22-139.178.89.65:35866.service - OpenSSH per-connection server daemon (139.178.89.65:35866). 
Dec 16 14:17:08.024000 audit[5249]: USER_ACCT pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:08.031775 sshd[5249]: Accepted publickey for core from 139.178.89.65 port 35866 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:17:08.033448 kernel: audit: type=1101 audit(1765894628.024:808): pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:08.032000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:08.035043 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:17:08.039680 kernel: audit: type=1103 audit(1765894628.032:809): pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:08.040450 kernel: audit: type=1006 audit(1765894628.032:810): pid=5249 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 14:17:08.032000 audit[5249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeeda44ef0 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:08.044698 kernel: audit: type=1300 audit(1765894628.032:810): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeeda44ef0 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:08.032000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:08.052393 kernel: audit: type=1327 audit(1765894628.032:810): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:08.059016 systemd-logind[1646]: New session 19 of user core. Dec 16 14:17:08.064511 systemd[1]: Started session-19.scope - Session 19 of User core. 
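The SYSCALL/PROCTITLE pair recorded for this login carries the process command line hex-encoded, with NUL bytes between arguments. Decoding the value shown above takes a few lines of Python and yields "sshd-session: core [priv]":

    # PROCTITLE value copied from the audit record above.
    proctitle_hex = "737368642D73657373696F6E3A20636F7265205B707269765D"

    # The kernel hex-encodes the command line; NUL bytes separate argv entries.
    argv = [part.decode("utf-8", "replace")
            for part in bytes.fromhex(proctitle_hex).split(b"\x00") if part]
    print(argv)   # ['sshd-session: core [priv]']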
Dec 16 14:17:08.070000 audit[5249]: USER_START pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:08.078272 kernel: audit: type=1105 audit(1765894628.070:811): pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:08.078000 audit[5252]: CRED_ACQ pid=5252 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:08.085213 kernel: audit: type=1103 audit(1765894628.078:812): pid=5252 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:08.598855 sshd[5252]: Connection closed by 139.178.89.65 port 35866 Dec 16 14:17:08.600036 sshd-session[5249]: pam_unix(sshd:session): session closed for user core Dec 16 14:17:08.600000 audit[5249]: USER_END pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:08.614245 kernel: audit: type=1106 audit(1765894628.600:813): pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:08.601000 audit[5249]: CRED_DISP pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:08.617960 systemd[1]: sshd@16-10.230.52.194:22-139.178.89.65:35866.service: Deactivated successfully. Dec 16 14:17:08.621885 kernel: audit: type=1104 audit(1765894628.601:814): pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:08.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.52.194:22-139.178.89.65:35866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:08.626520 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 14:17:08.628998 systemd-logind[1646]: Session 19 logged out. Waiting for processes to exit. Dec 16 14:17:08.633956 systemd-logind[1646]: Removed session 19. 
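Throughout this capture each userspace audit record is echoed a few milliseconds later by a kernel line of the form "audit: type=NNNN ...", so the numeric types can be paired with record names straight from the log. A small lookup assembled that way (SERVICE_STOP and the type-1006 login record are filled in from the standard Linux audit numbering), plus a helper for pulling one session's PAM records out of an exported journal; the file name in the usage note is a placeholder:

    # Numeric audit types paired with record names via the "kernel: audit: type=NNNN"
    # echoes seen in this capture; 1006 and 1131 come from the standard numbering.
    AUDIT_TYPES = {
        1006: "LOGIN",
        1101: "USER_ACCT",
        1103: "CRED_ACQ",
        1104: "CRED_DISP",
        1105: "USER_START",
        1106: "USER_END",
        1130: "SERVICE_START",
        1131: "SERVICE_STOP",
        1300: "SYSCALL",
        1325: "NETFILTER_CFG",
        1327: "PROCTITLE",
    }

    def session_records(lines, ses):
        """Yield the PAM-related audit records for a single audit session id."""
        needle = f"ses={ses} "
        for line in lines:
            if needle in line and "op=PAM:" in line:
                yield line.rstrip()

    # Usage sketch against a placeholder journal export:
    # with open("journal.txt") as fh:
    #     for record in session_records(fh, 19):
    #         print(record)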
Dec 16 14:17:09.554565 kubelet[2975]: E1216 14:17:09.554424 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" podUID="8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0" Dec 16 14:17:13.557359 containerd[1683]: time="2025-12-16T14:17:13.557114927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 14:17:13.763165 systemd[1]: Started sshd@17-10.230.52.194:22-139.178.89.65:38714.service - OpenSSH per-connection server daemon (139.178.89.65:38714). Dec 16 14:17:13.784651 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 14:17:13.785443 kernel: audit: type=1130 audit(1765894633.762:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.52.194:22-139.178.89.65:38714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:13.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.52.194:22-139.178.89.65:38714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:13.885676 containerd[1683]: time="2025-12-16T14:17:13.884576884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:17:13.886195 containerd[1683]: time="2025-12-16T14:17:13.886101725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 14:17:13.887197 containerd[1683]: time="2025-12-16T14:17:13.886115542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 14:17:13.888590 kubelet[2975]: E1216 14:17:13.887691 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 14:17:13.888590 kubelet[2975]: E1216 14:17:13.887836 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 14:17:13.891835 kubelet[2975]: E1216 14:17:13.889221 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvzdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tfzg7_calico-system(1248d2d9-77a6-4a9d-9b93-4af871a2edbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 14:17:13.897752 containerd[1683]: time="2025-12-16T14:17:13.897388479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 14:17:14.229135 containerd[1683]: time="2025-12-16T14:17:14.228959926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:17:14.230706 containerd[1683]: time="2025-12-16T14:17:14.230581098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 14:17:14.230802 containerd[1683]: time="2025-12-16T14:17:14.230677757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 14:17:14.231101 kubelet[2975]: E1216 14:17:14.230952 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 14:17:14.231225 kubelet[2975]: E1216 14:17:14.231125 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 14:17:14.232253 kubelet[2975]: E1216 14:17:14.231680 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvzdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tfzg7_calico-system(1248d2d9-77a6-4a9d-9b93-4af871a2edbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 14:17:14.233468 kubelet[2975]: E1216 14:17:14.233373 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:17:14.602000 audit[5273]: USER_ACCT pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh 
res=success' Dec 16 14:17:14.608460 sshd[5273]: Accepted publickey for core from 139.178.89.65 port 38714 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:17:14.613633 kernel: audit: type=1101 audit(1765894634.602:817): pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:14.617727 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:17:14.615000 audit[5273]: CRED_ACQ pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:14.628210 kernel: audit: type=1103 audit(1765894634.615:818): pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:14.638139 kernel: audit: type=1006 audit(1765894634.615:819): pid=5273 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 16 14:17:14.640023 systemd-logind[1646]: New session 20 of user core. Dec 16 14:17:14.615000 audit[5273]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd59f6d30 a2=3 a3=0 items=0 ppid=1 pid=5273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:14.646217 kernel: audit: type=1300 audit(1765894634.615:819): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd59f6d30 a2=3 a3=0 items=0 ppid=1 pid=5273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:14.648844 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 14:17:14.615000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:14.655290 kernel: audit: type=1327 audit(1765894634.615:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:14.656000 audit[5273]: USER_START pid=5273 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:14.664210 kernel: audit: type=1105 audit(1765894634.656:820): pid=5273 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:14.663000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:14.674235 kernel: audit: type=1103 audit(1765894634.663:821): pid=5276 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:15.238207 sshd[5276]: Connection closed by 139.178.89.65 port 38714 Dec 16 14:17:15.239049 sshd-session[5273]: pam_unix(sshd:session): session closed for user core Dec 16 14:17:15.242000 audit[5273]: USER_END pid=5273 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:15.256034 kernel: audit: type=1106 audit(1765894635.242:822): pid=5273 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:15.256358 systemd[1]: sshd@17-10.230.52.194:22-139.178.89.65:38714.service: Deactivated successfully. Dec 16 14:17:15.242000 audit[5273]: CRED_DISP pid=5273 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:15.262897 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 14:17:15.263212 kernel: audit: type=1104 audit(1765894635.242:823): pid=5273 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:15.268317 systemd-logind[1646]: Session 20 logged out. Waiting for processes to exit. Dec 16 14:17:15.270232 systemd-logind[1646]: Removed session 20. 
Dec 16 14:17:15.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.52.194:22-139.178.89.65:38714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:15.403589 systemd[1]: Started sshd@18-10.230.52.194:22-139.178.89.65:38716.service - OpenSSH per-connection server daemon (139.178.89.65:38716). Dec 16 14:17:15.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.52.194:22-139.178.89.65:38716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:16.238000 audit[5288]: USER_ACCT pid=5288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:16.241235 sshd[5288]: Accepted publickey for core from 139.178.89.65 port 38716 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:17:16.240000 audit[5288]: CRED_ACQ pid=5288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:16.241000 audit[5288]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6bc87c30 a2=3 a3=0 items=0 ppid=1 pid=5288 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:16.241000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:16.244134 sshd-session[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:17:16.257338 systemd-logind[1646]: New session 21 of user core. Dec 16 14:17:16.264468 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 14:17:16.271000 audit[5288]: USER_START pid=5288 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:16.275000 audit[5291]: CRED_ACQ pid=5291 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:16.558317 containerd[1683]: time="2025-12-16T14:17:16.558039456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 14:17:16.879504 containerd[1683]: time="2025-12-16T14:17:16.878539535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:17:16.881294 containerd[1683]: time="2025-12-16T14:17:16.881120112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 14:17:16.881576 containerd[1683]: time="2025-12-16T14:17:16.881507540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 14:17:16.883292 kubelet[2975]: E1216 14:17:16.882125 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 14:17:16.884861 kubelet[2975]: E1216 14:17:16.884044 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 14:17:16.884861 kubelet[2975]: E1216 14:17:16.884489 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwp5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8c8c88d5b-gxh2p_calico-system(5790375d-68f1-4555-984f-974084235d42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 14:17:16.885228 containerd[1683]: time="2025-12-16T14:17:16.884585146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 14:17:16.886240 kubelet[2975]: E1216 14:17:16.885702 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" podUID="5790375d-68f1-4555-984f-974084235d42" Dec 16 14:17:17.210309 containerd[1683]: time="2025-12-16T14:17:17.209628133Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:17:17.213891 containerd[1683]: time="2025-12-16T14:17:17.213386459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 14:17:17.213891 containerd[1683]: time="2025-12-16T14:17:17.213585877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 14:17:17.215441 kubelet[2975]: E1216 14:17:17.215226 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:17:17.216886 kubelet[2975]: E1216 14:17:17.215560 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:17:17.217986 kubelet[2975]: E1216 14:17:17.217825 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ec4a891a20f447ee9adfa1d6478964a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbf5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b4555d8df-7md4r_calico-system(32682285-0749-4677-aeaa-b30aca23774e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 14:17:17.222390 containerd[1683]: time="2025-12-16T14:17:17.221644702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 14:17:17.368212 sshd[5291]: Connection closed by 139.178.89.65 port 38716 Dec 16 14:17:17.370852 sshd-session[5288]: pam_unix(sshd:session): session closed for user core Dec 16 
14:17:17.376000 audit[5288]: USER_END pid=5288 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:17.377000 audit[5288]: CRED_DISP pid=5288 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:17.383642 systemd[1]: sshd@18-10.230.52.194:22-139.178.89.65:38716.service: Deactivated successfully. Dec 16 14:17:17.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.52.194:22-139.178.89.65:38716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:17.387760 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 14:17:17.391006 systemd-logind[1646]: Session 21 logged out. Waiting for processes to exit. Dec 16 14:17:17.394685 systemd-logind[1646]: Removed session 21. Dec 16 14:17:17.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.52.194:22-139.178.89.65:38726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:17.526725 systemd[1]: Started sshd@19-10.230.52.194:22-139.178.89.65:38726.service - OpenSSH per-connection server daemon (139.178.89.65:38726). Dec 16 14:17:17.548335 containerd[1683]: time="2025-12-16T14:17:17.548267308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:17:17.550638 containerd[1683]: time="2025-12-16T14:17:17.550594118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 14:17:17.551198 containerd[1683]: time="2025-12-16T14:17:17.550704162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 14:17:17.551297 kubelet[2975]: E1216 14:17:17.551001 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:17:17.551297 kubelet[2975]: E1216 14:17:17.551089 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:17:17.553254 kubelet[2975]: E1216 14:17:17.552415 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbf5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b4555d8df-7md4r_calico-system(32682285-0749-4677-aeaa-b30aca23774e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 14:17:17.555979 kubelet[2975]: E1216 14:17:17.554273 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b4555d8df-7md4r" podUID="32682285-0749-4677-aeaa-b30aca23774e" Dec 16 14:17:18.357000 audit[5301]: USER_ACCT pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:18.360220 sshd[5301]: Accepted publickey for core from 139.178.89.65 port 38726 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:17:18.360000 audit[5301]: CRED_ACQ pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:18.361000 audit[5301]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb461ceb0 a2=3 a3=0 items=0 ppid=1 pid=5301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:18.361000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:18.363300 sshd-session[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:17:18.374428 systemd-logind[1646]: New session 22 of user core. Dec 16 14:17:18.380414 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 14:17:18.384000 audit[5301]: USER_START pid=5301 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:18.389000 audit[5331]: CRED_ACQ pid=5331 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:19.781605 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 16 14:17:19.781934 kernel: audit: type=1325 audit(1765894639.772:840): table=filter:133 family=2 entries=26 op=nft_register_rule pid=5341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:17:19.772000 audit[5341]: NETFILTER_CFG table=filter:133 family=2 entries=26 op=nft_register_rule pid=5341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:17:19.772000 audit[5341]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff1b785620 a2=0 a3=7fff1b78560c items=0 ppid=3086 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:19.794790 kernel: audit: type=1300 audit(1765894639.772:840): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff1b785620 a2=0 a3=7fff1b78560c items=0 ppid=3086 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:19.794885 kernel: audit: type=1327 audit(1765894639.772:840): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:17:19.772000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:17:19.795000 audit[5341]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=5341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:17:19.800248 kernel: audit: type=1325 audit(1765894639.795:841): table=nat:134 family=2 entries=20 op=nft_register_rule pid=5341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:17:19.795000 audit[5341]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff1b785620 a2=0 a3=0 items=0 ppid=3086 pid=5341 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:19.795000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:17:19.812531 kernel: audit: type=1300 audit(1765894639.795:841): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff1b785620 a2=0 a3=0 items=0 ppid=3086 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:19.812654 kernel: audit: type=1327 audit(1765894639.795:841): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:17:19.841000 audit[5343]: NETFILTER_CFG table=filter:135 family=2 entries=38 op=nft_register_rule pid=5343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:17:19.841000 audit[5343]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc746c66c0 a2=0 a3=7ffc746c66ac items=0 ppid=3086 pid=5343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:19.848337 kernel: audit: type=1325 audit(1765894639.841:842): table=filter:135 family=2 entries=38 op=nft_register_rule pid=5343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:17:19.848435 kernel: audit: type=1300 audit(1765894639.841:842): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc746c66c0 a2=0 a3=7ffc746c66ac items=0 ppid=3086 pid=5343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:19.841000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:17:19.856209 kernel: audit: type=1327 audit(1765894639.841:842): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:17:19.857000 audit[5343]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:17:19.862298 kernel: audit: type=1325 audit(1765894639.857:843): table=nat:136 family=2 entries=20 op=nft_register_rule pid=5343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:17:19.857000 audit[5343]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc746c66c0 a2=0 a3=0 items=0 ppid=3086 pid=5343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:19.857000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:17:19.892678 sshd[5331]: Connection closed by 139.178.89.65 port 38726 Dec 16 14:17:19.893516 sshd-session[5301]: pam_unix(sshd:session): session closed for user core Dec 16 14:17:19.896000 audit[5301]: USER_END pid=5301 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:19.897000 audit[5301]: CRED_DISP pid=5301 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:19.903313 systemd[1]: sshd@19-10.230.52.194:22-139.178.89.65:38726.service: Deactivated successfully. Dec 16 14:17:19.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.52.194:22-139.178.89.65:38726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:19.909412 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 14:17:19.914109 systemd-logind[1646]: Session 22 logged out. Waiting for processes to exit. Dec 16 14:17:19.918993 systemd-logind[1646]: Removed session 22. Dec 16 14:17:20.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.52.194:22-139.178.89.65:38728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:20.058762 systemd[1]: Started sshd@20-10.230.52.194:22-139.178.89.65:38728.service - OpenSSH per-connection server daemon (139.178.89.65:38728). Dec 16 14:17:20.556684 containerd[1683]: time="2025-12-16T14:17:20.556556116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 14:17:20.881000 audit[5348]: USER_ACCT pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:20.884249 sshd[5348]: Accepted publickey for core from 139.178.89.65 port 38728 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:17:20.883000 audit[5348]: CRED_ACQ pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:20.883000 audit[5348]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5d7880f0 a2=3 a3=0 items=0 ppid=1 pid=5348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:20.883000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:20.885903 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:17:20.887612 containerd[1683]: time="2025-12-16T14:17:20.887546960Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:17:20.889703 containerd[1683]: time="2025-12-16T14:17:20.889656103Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
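The NETFILTER_CFG records a few entries above (the filter and nat table reloads by pids 5341 and 5343) carry the same kind of hex PROCTITLE field as the ssh logins. Decoding it recovers the exact restore invocation behind those rule syncs, "iptables-restore -w 5 -W 100000 --noflush --counters":

    # PROCTITLE from the NETFILTER_CFG/SYSCALL records above (iptables-restore).
    proctitle_hex = (
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    )

    raw = bytes.fromhex(proctitle_hex)
    argv = [part.decode() for part in raw.split(b"\x00") if part]
    print(" ".join(argv))   # iptables-restore -w 5 -W 100000 --noflush --counters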
Dec 16 14:17:20.889827 containerd[1683]: time="2025-12-16T14:17:20.889798466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 14:17:20.890617 kubelet[2975]: E1216 14:17:20.890171 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 14:17:20.894199 kubelet[2975]: E1216 14:17:20.891188 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 14:17:20.894199 kubelet[2975]: E1216 14:17:20.892815 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb8l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2m9mr_calico-system(86b288c9-63f1-4f44-8c9e-eb5a65f83789): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 14:17:20.894518 kubelet[2975]: E1216 14:17:20.894406 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2m9mr" podUID="86b288c9-63f1-4f44-8c9e-eb5a65f83789" Dec 16 14:17:20.895210 containerd[1683]: time="2025-12-16T14:17:20.893163263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 14:17:20.902913 systemd-logind[1646]: New session 23 of user core. Dec 16 14:17:20.907505 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 14:17:20.915000 audit[5348]: USER_START pid=5348 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:20.919000 audit[5351]: CRED_ACQ pid=5351 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:21.209699 containerd[1683]: time="2025-12-16T14:17:21.209387524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:17:21.211576 containerd[1683]: time="2025-12-16T14:17:21.211243131Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 14:17:21.211576 containerd[1683]: time="2025-12-16T14:17:21.211234835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 14:17:21.211842 kubelet[2975]: E1216 14:17:21.211772 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:17:21.211945 kubelet[2975]: E1216 14:17:21.211865 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:17:21.212208 kubelet[2975]: E1216 14:17:21.212111 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkmx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66659c8785-m7865_calico-apiserver(6e75d316-f851-4233-b053-cd9dc148b92b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 14:17:21.214200 kubelet[2975]: E1216 14:17:21.213608 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" podUID="6e75d316-f851-4233-b053-cd9dc148b92b" Dec 16 14:17:21.559368 containerd[1683]: time="2025-12-16T14:17:21.559095262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 14:17:21.780407 sshd[5351]: Connection closed by 139.178.89.65 port 38728 Dec 16 14:17:21.780232 sshd-session[5348]: pam_unix(sshd:session): session closed for user core Dec 16 14:17:21.787000 audit[5348]: USER_END pid=5348 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:21.787000 audit[5348]: CRED_DISP pid=5348 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:21.791799 systemd[1]: sshd@20-10.230.52.194:22-139.178.89.65:38728.service: Deactivated successfully. Dec 16 14:17:21.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.52.194:22-139.178.89.65:38728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:21.798294 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 14:17:21.800482 systemd-logind[1646]: Session 23 logged out. Waiting for processes to exit. Dec 16 14:17:21.806785 systemd-logind[1646]: Removed session 23. 
Dec 16 14:17:21.939694 containerd[1683]: time="2025-12-16T14:17:21.939245651Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:17:21.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.52.194:22-139.178.89.65:41952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:21.940616 systemd[1]: Started sshd@21-10.230.52.194:22-139.178.89.65:41952.service - OpenSSH per-connection server daemon (139.178.89.65:41952). Dec 16 14:17:21.948196 containerd[1683]: time="2025-12-16T14:17:21.947163156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 14:17:21.948619 containerd[1683]: time="2025-12-16T14:17:21.948436756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 14:17:21.948719 kubelet[2975]: E1216 14:17:21.948638 2975 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:17:21.949403 kubelet[2975]: E1216 14:17:21.948706 2975 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:17:21.949403 kubelet[2975]: E1216 14:17:21.948947 2975 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbb22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66659c8785-q7tzf_calico-apiserver(8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 14:17:21.950458 kubelet[2975]: E1216 14:17:21.950347 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" podUID="8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0" Dec 16 14:17:22.792000 audit[5361]: USER_ACCT pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:22.796301 sshd[5361]: Accepted publickey for core from 139.178.89.65 port 41952 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:17:22.796000 audit[5361]: CRED_ACQ pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:22.796000 audit[5361]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc37827350 a2=3 a3=0 items=0 ppid=1 pid=5361 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:22.796000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:22.800421 sshd-session[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:17:22.822822 systemd-logind[1646]: New session 24 of user core. Dec 16 14:17:22.830552 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 14:17:22.835000 audit[5361]: USER_START pid=5361 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:22.839000 audit[5364]: CRED_ACQ pid=5364 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:23.365727 sshd[5364]: Connection closed by 139.178.89.65 port 41952 Dec 16 14:17:23.366234 sshd-session[5361]: pam_unix(sshd:session): session closed for user core Dec 16 14:17:23.368000 audit[5361]: USER_END pid=5361 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:23.368000 audit[5361]: CRED_DISP pid=5361 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:23.373509 systemd[1]: sshd@21-10.230.52.194:22-139.178.89.65:41952.service: Deactivated successfully. Dec 16 14:17:23.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.52.194:22-139.178.89.65:41952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:23.377552 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 14:17:23.379514 systemd-logind[1646]: Session 24 logged out. Waiting for processes to exit. Dec 16 14:17:23.381969 systemd-logind[1646]: Removed session 24. Dec 16 14:17:26.556210 kubelet[2975]: E1216 14:17:26.556114 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf" Dec 16 14:17:28.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.52.194:22-139.178.89.65:41954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:17:28.539659 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 16 14:17:28.541578 kernel: audit: type=1130 audit(1765894648.528:865): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.52.194:22-139.178.89.65:41954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:28.529683 systemd[1]: Started sshd@22-10.230.52.194:22-139.178.89.65:41954.service - OpenSSH per-connection server daemon (139.178.89.65:41954). Dec 16 14:17:28.555457 kubelet[2975]: E1216 14:17:28.555386 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8c8c88d5b-gxh2p" podUID="5790375d-68f1-4555-984f-974084235d42" Dec 16 14:17:29.339000 audit[5396]: USER_ACCT pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:29.348278 kernel: audit: type=1101 audit(1765894649.339:866): pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:29.349108 sshd[5396]: Accepted publickey for core from 139.178.89.65 port 41954 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:17:29.350684 sshd-session[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:17:29.348000 audit[5396]: CRED_ACQ pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:29.356578 kernel: audit: type=1103 audit(1765894649.348:867): pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:29.356704 kernel: audit: type=1006 audit(1765894649.348:868): pid=5396 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 14:17:29.348000 audit[5396]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe506e4d00 a2=3 a3=0 items=0 ppid=1 pid=5396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:29.348000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:29.366752 kernel: audit: type=1300 audit(1765894649.348:868): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe506e4d00 a2=3 a3=0 items=0 ppid=1 pid=5396 auid=500 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:29.366857 kernel: audit: type=1327 audit(1765894649.348:868): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:29.373551 systemd-logind[1646]: New session 25 of user core. Dec 16 14:17:29.385508 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 14:17:29.390000 audit[5396]: USER_START pid=5396 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:29.398241 kernel: audit: type=1105 audit(1765894649.390:869): pid=5396 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:29.399000 audit[5401]: CRED_ACQ pid=5401 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:29.407258 kernel: audit: type=1103 audit(1765894649.399:870): pid=5401 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:29.546000 audit[5403]: NETFILTER_CFG table=filter:137 family=2 entries=26 op=nft_register_rule pid=5403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:17:29.552208 kernel: audit: type=1325 audit(1765894649.546:871): table=filter:137 family=2 entries=26 op=nft_register_rule pid=5403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:17:29.546000 audit[5403]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffeda7c0b0 a2=0 a3=7fffeda7c09c items=0 ppid=3086 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:29.559251 kernel: audit: type=1300 audit(1765894649.546:871): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffeda7c0b0 a2=0 a3=7fffeda7c09c items=0 ppid=3086 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:29.546000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:17:29.564000 audit[5403]: NETFILTER_CFG table=nat:138 family=2 entries=104 op=nft_register_chain pid=5403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:17:29.564000 audit[5403]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffeda7c0b0 a2=0 a3=7fffeda7c09c items=0 ppid=3086 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:29.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:17:29.958569 sshd[5401]: Connection closed by 139.178.89.65 port 41954 Dec 16 14:17:29.960092 sshd-session[5396]: pam_unix(sshd:session): session closed for user core Dec 16 14:17:29.964000 audit[5396]: USER_END pid=5396 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:29.964000 audit[5396]: CRED_DISP pid=5396 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:29.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.52.194:22-139.178.89.65:41954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:29.969804 systemd[1]: sshd@22-10.230.52.194:22-139.178.89.65:41954.service: Deactivated successfully. Dec 16 14:17:29.976550 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 14:17:29.979888 systemd-logind[1646]: Session 25 logged out. Waiting for processes to exit. Dec 16 14:17:29.983070 systemd-logind[1646]: Removed session 25. Dec 16 14:17:32.557541 kubelet[2975]: E1216 14:17:32.557471 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b4555d8df-7md4r" podUID="32682285-0749-4677-aeaa-b30aca23774e" Dec 16 14:17:34.557093 kubelet[2975]: E1216 14:17:34.557025 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2m9mr" podUID="86b288c9-63f1-4f44-8c9e-eb5a65f83789" Dec 16 14:17:35.128345 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 14:17:35.128662 kernel: audit: type=1130 audit(1765894655.125:876): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.52.194:22-139.178.89.65:58478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:17:35.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.52.194:22-139.178.89.65:58478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:35.126094 systemd[1]: Started sshd@23-10.230.52.194:22-139.178.89.65:58478.service - OpenSSH per-connection server daemon (139.178.89.65:58478). Dec 16 14:17:35.615930 kubelet[2975]: E1216 14:17:35.615756 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-m7865" podUID="6e75d316-f851-4233-b053-cd9dc148b92b" Dec 16 14:17:35.943000 audit[5415]: USER_ACCT pid=5415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:35.954211 kernel: audit: type=1101 audit(1765894655.943:877): pid=5415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:35.954536 sshd[5415]: Accepted publickey for core from 139.178.89.65 port 58478 ssh2: RSA SHA256:HFgv0LODQ11hKhkpz13ZhYSt64haPZ/HUbDL52pAA0k Dec 16 14:17:35.958949 sshd-session[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:17:35.971221 kernel: audit: type=1103 audit(1765894655.956:878): pid=5415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:35.956000 audit[5415]: CRED_ACQ pid=5415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:35.977668 systemd-logind[1646]: New session 26 of user core. 
Dec 16 14:17:35.981338 kernel: audit: type=1006 audit(1765894655.956:879): pid=5415 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 14:17:35.981416 kernel: audit: type=1300 audit(1765894655.956:879): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff22593010 a2=3 a3=0 items=0 ppid=1 pid=5415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:35.956000 audit[5415]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff22593010 a2=3 a3=0 items=0 ppid=1 pid=5415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:17:35.986436 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 14:17:35.956000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:35.990222 kernel: audit: type=1327 audit(1765894655.956:879): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:17:35.994000 audit[5415]: USER_START pid=5415 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:36.002225 kernel: audit: type=1105 audit(1765894655.994:880): pid=5415 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:36.001000 audit[5418]: CRED_ACQ pid=5418 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:36.008219 kernel: audit: type=1103 audit(1765894656.001:881): pid=5418 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:37.000002 sshd[5418]: Connection closed by 139.178.89.65 port 58478 Dec 16 14:17:36.999491 sshd-session[5415]: pam_unix(sshd:session): session closed for user core Dec 16 14:17:37.002000 audit[5415]: USER_END pid=5415 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:37.039081 kernel: audit: type=1106 audit(1765894657.002:882): pid=5415 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:37.039872 kernel: audit: type=1104 audit(1765894657.011:883): pid=5415 uid=0 
auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:37.011000 audit[5415]: CRED_DISP pid=5415 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 14:17:37.040816 systemd[1]: sshd@23-10.230.52.194:22-139.178.89.65:58478.service: Deactivated successfully. Dec 16 14:17:37.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.52.194:22-139.178.89.65:58478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:17:37.048470 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 14:17:37.053816 systemd-logind[1646]: Session 26 logged out. Waiting for processes to exit. Dec 16 14:17:37.057390 systemd-logind[1646]: Removed session 26. Dec 16 14:17:37.556719 kubelet[2975]: E1216 14:17:37.555226 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66659c8785-q7tzf" podUID="8ce9819e-d54a-4fa9-97c3-d8a0be2ecea0" Dec 16 14:17:40.556657 kubelet[2975]: E1216 14:17:40.556502 2975 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tfzg7" podUID="1248d2d9-77a6-4a9d-9b93-4af871a2edbf"