Dec 16 12:56:06.350336 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025 Dec 16 12:56:06.350386 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 12:56:06.350401 kernel: BIOS-provided physical RAM map: Dec 16 12:56:06.350412 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 16 12:56:06.350428 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 16 12:56:06.350439 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 16 12:56:06.350452 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Dec 16 12:56:06.350473 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Dec 16 12:56:06.350486 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 16 12:56:06.350497 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 16 12:56:06.350509 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 12:56:06.350520 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 16 12:56:06.350531 kernel: NX (Execute Disable) protection: active Dec 16 12:56:06.350548 kernel: APIC: Static calls initialized Dec 16 12:56:06.350561 kernel: SMBIOS 2.8 present. Dec 16 12:56:06.350574 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Dec 16 12:56:06.350587 kernel: DMI: Memory slots populated: 1/1 Dec 16 12:56:06.350603 kernel: Hypervisor detected: KVM Dec 16 12:56:06.350615 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 16 12:56:06.350628 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 12:56:06.350640 kernel: kvm-clock: using sched offset of 5249549664 cycles Dec 16 12:56:06.350653 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 12:56:06.350666 kernel: tsc: Detected 2499.998 MHz processor Dec 16 12:56:06.350679 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 12:56:06.350692 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 12:56:06.350709 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 16 12:56:06.350722 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 16 12:56:06.350734 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 12:56:06.350747 kernel: Using GB pages for direct mapping Dec 16 12:56:06.350759 kernel: ACPI: Early table checksum verification disabled Dec 16 12:56:06.350771 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Dec 16 12:56:06.350784 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:56:06.350796 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:56:06.350813 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:56:06.350826 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Dec 16 12:56:06.350852 kernel: ACPI: APIC 
0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:56:06.350865 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:56:06.350878 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:56:06.350890 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:56:06.350903 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Dec 16 12:56:06.350925 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Dec 16 12:56:06.350938 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Dec 16 12:56:06.350952 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Dec 16 12:56:06.350965 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Dec 16 12:56:06.350982 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Dec 16 12:56:06.350995 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Dec 16 12:56:06.351008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 16 12:56:06.351021 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 16 12:56:06.351034 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Dec 16 12:56:06.351047 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Dec 16 12:56:06.351060 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Dec 16 12:56:06.351077 kernel: Zone ranges: Dec 16 12:56:06.351090 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 12:56:06.351103 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Dec 16 12:56:06.351116 kernel: Normal empty Dec 16 12:56:06.351128 kernel: Device empty Dec 16 12:56:06.351142 kernel: Movable zone start for each node Dec 16 12:56:06.351154 kernel: Early memory node ranges Dec 16 12:56:06.351167 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 16 12:56:06.351185 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Dec 16 12:56:06.351198 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Dec 16 12:56:06.351211 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 12:56:06.351242 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 16 12:56:06.351257 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Dec 16 12:56:06.351270 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 16 12:56:06.351289 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 12:56:06.351310 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 12:56:06.351323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 16 12:56:06.351336 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 12:56:06.351349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 12:56:06.351362 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 12:56:06.351375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 12:56:06.351388 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 12:56:06.351405 kernel: TSC deadline timer available Dec 16 12:56:06.351419 kernel: CPU topo: Max. logical packages: 16 Dec 16 12:56:06.351431 kernel: CPU topo: Max. logical dies: 16 Dec 16 12:56:06.351444 kernel: CPU topo: Max. dies per package: 1 Dec 16 12:56:06.351456 kernel: CPU topo: Max. 
threads per core: 1 Dec 16 12:56:06.351469 kernel: CPU topo: Num. cores per package: 1 Dec 16 12:56:06.351482 kernel: CPU topo: Num. threads per package: 1 Dec 16 12:56:06.351495 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Dec 16 12:56:06.351512 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 12:56:06.351525 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 16 12:56:06.351538 kernel: Booting paravirtualized kernel on KVM Dec 16 12:56:06.351552 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 12:56:06.351565 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Dec 16 12:56:06.351578 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Dec 16 12:56:06.351591 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Dec 16 12:56:06.351608 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Dec 16 12:56:06.351621 kernel: kvm-guest: PV spinlocks enabled Dec 16 12:56:06.351634 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 12:56:06.351649 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 12:56:06.351662 kernel: random: crng init done Dec 16 12:56:06.351675 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:56:06.351688 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 12:56:06.351706 kernel: Fallback order for Node 0: 0 Dec 16 12:56:06.351719 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Dec 16 12:56:06.351732 kernel: Policy zone: DMA32 Dec 16 12:56:06.351745 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 12:56:06.351758 kernel: software IO TLB: area num 16. Dec 16 12:56:06.351771 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Dec 16 12:56:06.351784 kernel: Kernel/User page tables isolation: enabled Dec 16 12:56:06.351801 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 12:56:06.351814 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 12:56:06.351827 kernel: Dynamic Preempt: voluntary Dec 16 12:56:06.351916 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 12:56:06.351931 kernel: rcu: RCU event tracing is enabled. Dec 16 12:56:06.351944 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Dec 16 12:56:06.351957 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 12:56:06.351970 kernel: Rude variant of Tasks RCU enabled. Dec 16 12:56:06.351990 kernel: Tracing variant of Tasks RCU enabled. Dec 16 12:56:06.352003 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 12:56:06.352016 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Dec 16 12:56:06.352030 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 16 12:56:06.352043 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Dec 16 12:56:06.352056 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 16 12:56:06.352069 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Dec 16 12:56:06.352087 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 12:56:06.352110 kernel: Console: colour VGA+ 80x25 Dec 16 12:56:06.352128 kernel: printk: legacy console [tty0] enabled Dec 16 12:56:06.352142 kernel: printk: legacy console [ttyS0] enabled Dec 16 12:56:06.352162 kernel: ACPI: Core revision 20240827 Dec 16 12:56:06.352177 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 12:56:06.352191 kernel: x2apic enabled Dec 16 12:56:06.352205 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 12:56:06.353963 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 16 12:56:06.353996 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Dec 16 12:56:06.354011 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 12:56:06.354026 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 16 12:56:06.354039 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 16 12:56:06.354053 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 12:56:06.354071 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 12:56:06.354084 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 12:56:06.354098 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 16 12:56:06.354111 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 12:56:06.354124 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 12:56:06.354138 kernel: MDS: Mitigation: Clear CPU buffers Dec 16 12:56:06.354151 kernel: MMIO Stale Data: Unknown: No mitigations Dec 16 12:56:06.354164 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 16 12:56:06.354177 kernel: active return thunk: its_return_thunk Dec 16 12:56:06.354190 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 12:56:06.354208 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 12:56:06.354241 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 12:56:06.354257 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 12:56:06.354271 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 12:56:06.354284 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 16 12:56:06.354297 kernel: Freeing SMP alternatives memory: 32K Dec 16 12:56:06.354310 kernel: pid_max: default: 32768 minimum: 301 Dec 16 12:56:06.354324 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 12:56:06.354337 kernel: landlock: Up and running. Dec 16 12:56:06.354350 kernel: SELinux: Initializing. Dec 16 12:56:06.354370 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 12:56:06.354384 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 12:56:06.354397 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Dec 16 12:56:06.354411 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Dec 16 12:56:06.354424 kernel: signal: max sigframe size: 1776 Dec 16 12:56:06.354438 kernel: rcu: Hierarchical SRCU implementation. Dec 16 12:56:06.354453 kernel: rcu: Max phase no-delay instances is 400. Dec 16 12:56:06.354466 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Dec 16 12:56:06.354484 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 12:56:06.354498 kernel: smp: Bringing up secondary CPUs ... Dec 16 12:56:06.354512 kernel: smpboot: x86: Booting SMP configuration: Dec 16 12:56:06.354525 kernel: .... node #0, CPUs: #1 Dec 16 12:56:06.354539 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 12:56:06.354552 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Dec 16 12:56:06.354566 kernel: Memory: 1912064K/2096616K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 178536K reserved, 0K cma-reserved) Dec 16 12:56:06.354585 kernel: devtmpfs: initialized Dec 16 12:56:06.354598 kernel: x86/mm: Memory block size: 128MB Dec 16 12:56:06.354612 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 12:56:06.354626 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Dec 16 12:56:06.354640 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 12:56:06.354653 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 12:56:06.354666 kernel: audit: initializing netlink subsys (disabled) Dec 16 12:56:06.354684 kernel: audit: type=2000 audit(1765889762.689:1): state=initialized audit_enabled=0 res=1 Dec 16 12:56:06.354698 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 12:56:06.354712 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 12:56:06.354725 kernel: cpuidle: using governor menu Dec 16 12:56:06.354738 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 12:56:06.354752 kernel: dca service started, version 1.12.1 Dec 16 12:56:06.354774 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Dec 16 12:56:06.354789 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 16 12:56:06.354808 kernel: PCI: Using configuration type 1 for base access Dec 16 12:56:06.354822 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 12:56:06.354848 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 12:56:06.354863 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 12:56:06.354877 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 12:56:06.354891 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 12:56:06.354904 kernel: ACPI: Added _OSI(Module Device) Dec 16 12:56:06.354924 kernel: ACPI: Added _OSI(Processor Device) Dec 16 12:56:06.354937 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 12:56:06.354951 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 12:56:06.354964 kernel: ACPI: Interpreter enabled Dec 16 12:56:06.354978 kernel: ACPI: PM: (supports S0 S5) Dec 16 12:56:06.354991 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 12:56:06.355005 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 12:56:06.355023 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 12:56:06.355037 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 16 12:56:06.355050 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 12:56:06.355430 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 12:56:06.355672 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 16 12:56:06.355920 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 16 12:56:06.355949 kernel: PCI host bridge to bus 0000:00 Dec 16 12:56:06.356189 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 12:56:06.356910 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 12:56:06.357127 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 12:56:06.360561 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 16 12:56:06.360824 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 16 12:56:06.361061 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Dec 16 12:56:06.361302 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 12:56:06.361594 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 16 12:56:06.361870 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Dec 16 12:56:06.362107 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Dec 16 12:56:06.364413 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Dec 16 12:56:06.364672 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Dec 16 12:56:06.364927 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 12:56:06.365254 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 12:56:06.365495 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Dec 16 12:56:06.365739 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 12:56:06.365983 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 16 12:56:06.366254 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 12:56:06.366502 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 12:56:06.366729 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Dec 16 12:56:06.366967 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 
12:56:06.367907 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 16 12:56:06.368139 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 12:56:06.368423 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 12:56:06.368648 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Dec 16 12:56:06.368891 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 12:56:06.369116 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 16 12:56:06.369380 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 12:56:06.369633 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 12:56:06.369881 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Dec 16 12:56:06.370115 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 12:56:06.370354 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 16 12:56:06.370587 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 12:56:06.370855 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 12:56:06.371090 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Dec 16 12:56:06.371358 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 12:56:06.371583 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 16 12:56:06.371817 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 12:56:06.372070 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 12:56:06.372334 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Dec 16 12:56:06.372567 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 12:56:06.372814 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 12:56:06.373055 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 12:56:06.373326 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 12:56:06.373551 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Dec 16 12:56:06.373793 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 12:56:06.374038 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 12:56:06.374293 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 12:56:06.374541 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 12:56:06.374765 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Dec 16 12:56:06.375020 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 12:56:06.375279 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 12:56:06.375511 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 12:56:06.375761 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 12:56:06.376009 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Dec 16 12:56:06.376261 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Dec 16 12:56:06.377027 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Dec 16 12:56:06.377498 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Dec 16 12:56:06.377751 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 16 12:56:06.377997 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Dec 16 12:56:06.378262 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Dec 16 12:56:06.378527 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Dec 16 12:56:06.378789 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 16 12:56:06.379031 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 16 12:56:06.379325 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 16 12:56:06.379554 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Dec 16 12:56:06.379779 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Dec 16 12:56:06.380057 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 16 12:56:06.380301 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Dec 16 12:56:06.380548 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Dec 16 12:56:06.380776 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Dec 16 12:56:06.381021 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 12:56:06.381334 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 12:56:06.381586 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 12:56:06.381826 kernel: pci_bus 0000:02: extended config space not accessible Dec 16 12:56:06.382094 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Dec 16 12:56:06.382368 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Dec 16 12:56:06.382600 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 12:56:06.382880 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Dec 16 12:56:06.383111 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Dec 16 12:56:06.383357 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 12:56:06.383606 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Dec 16 12:56:06.383937 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Dec 16 12:56:06.384189 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 12:56:06.384493 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 12:56:06.384722 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 12:56:06.384972 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 12:56:06.385201 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 12:56:06.385446 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 12:56:06.385485 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 12:56:06.385501 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 12:56:06.385515 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 12:56:06.385529 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 12:56:06.385543 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 16 12:56:06.385565 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 16 12:56:06.385580 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 16 12:56:06.385604 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 16 12:56:06.385618 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 16 12:56:06.385632 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 16 12:56:06.385646 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 16 12:56:06.385660 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 16 12:56:06.385674 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 
16 12:56:06.385688 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 16 12:56:06.385712 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 16 12:56:06.385726 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 16 12:56:06.385740 kernel: iommu: Default domain type: Translated Dec 16 12:56:06.385754 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 12:56:06.385768 kernel: PCI: Using ACPI for IRQ routing Dec 16 12:56:06.385781 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 12:56:06.385795 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 16 12:56:06.385809 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Dec 16 12:56:06.386059 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 16 12:56:06.386336 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 16 12:56:06.386563 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 12:56:06.386584 kernel: vgaarb: loaded Dec 16 12:56:06.386599 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 12:56:06.386612 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:56:06.386642 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:56:06.386657 kernel: pnp: PnP ACPI init Dec 16 12:56:06.386919 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 16 12:56:06.386942 kernel: pnp: PnP ACPI: found 5 devices Dec 16 12:56:06.386957 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 12:56:06.386971 kernel: NET: Registered PF_INET protocol family Dec 16 12:56:06.386985 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 12:56:06.387015 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 16 12:56:06.387029 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:56:06.387043 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 12:56:06.387057 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 16 12:56:06.387071 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 16 12:56:06.387085 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 12:56:06.387105 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 12:56:06.387130 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:56:06.387144 kernel: NET: Registered PF_XDP protocol family Dec 16 12:56:06.387407 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Dec 16 12:56:06.387633 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 16 12:56:06.387874 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 16 12:56:06.388101 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 16 12:56:06.388360 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 16 12:56:06.388584 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 16 12:56:06.388808 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 16 12:56:06.389057 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 16 12:56:06.389303 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Dec 16 12:56:06.389527 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Dec 16 12:56:06.389751 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Dec 16 12:56:06.390010 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Dec 16 12:56:06.390259 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Dec 16 12:56:06.390485 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Dec 16 12:56:06.390708 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Dec 16 12:56:06.390945 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Dec 16 12:56:06.391178 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 12:56:06.391490 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 12:56:06.391715 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 12:56:06.391963 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 16 12:56:06.392187 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 16 12:56:06.392427 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 12:56:06.392650 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 12:56:06.392898 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 16 12:56:06.393141 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 16 12:56:06.393391 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 12:56:06.393616 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 12:56:06.393853 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 16 12:56:06.394078 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 16 12:56:06.394344 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 12:56:06.394572 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 12:56:06.394795 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 16 12:56:06.395033 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 16 12:56:06.395275 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 12:56:06.395518 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 12:56:06.395743 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 16 12:56:06.395983 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 16 12:56:06.396209 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 12:56:06.396460 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 12:56:06.396683 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 16 12:56:06.396939 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 12:56:06.397165 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 12:56:06.397412 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 12:56:06.397635 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Dec 16 12:56:06.397875 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 12:56:06.398100 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 12:56:06.398345 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 12:56:06.398568 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 16 12:56:06.398812 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 12:56:06.399050 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 12:56:06.399290 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 12:56:06.399501 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 12:56:06.399710 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 12:56:06.399934 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 16 12:56:06.400141 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 16 12:56:06.400392 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Dec 16 12:56:06.400625 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 16 12:56:06.400852 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Dec 16 12:56:06.401066 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 12:56:06.401311 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 16 12:56:06.401559 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Dec 16 12:56:06.401772 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 16 12:56:06.402007 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 12:56:06.402256 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Dec 16 12:56:06.402472 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 16 12:56:06.402689 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 12:56:06.402950 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 16 12:56:06.403165 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 16 12:56:06.403405 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 12:56:06.403631 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Dec 16 12:56:06.403857 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 16 12:56:06.404089 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 12:56:06.404347 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Dec 16 12:56:06.404562 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 16 12:56:06.404773 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 12:56:06.405035 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 16 12:56:06.405273 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 16 12:56:06.405517 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 12:56:06.405749 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 16 12:56:06.405982 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 16 12:56:06.406194 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 12:56:06.406217 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 12:56:06.406252 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:56:06.406282 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 12:56:06.406298 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 16 12:56:06.406313 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 12:56:06.406334 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 16 12:56:06.406349 kernel: Initialise system trusted keyrings Dec 16 12:56:06.406364 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 12:56:06.406378 
kernel: Key type asymmetric registered Dec 16 12:56:06.406403 kernel: Asymmetric key parser 'x509' registered Dec 16 12:56:06.406418 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 12:56:06.406433 kernel: io scheduler mq-deadline registered Dec 16 12:56:06.406447 kernel: io scheduler kyber registered Dec 16 12:56:06.406462 kernel: io scheduler bfq registered Dec 16 12:56:06.406696 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 16 12:56:06.406938 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 16 12:56:06.407183 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:56:06.407436 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 16 12:56:06.407662 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 16 12:56:06.407902 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:56:06.408131 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 16 12:56:06.408394 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 16 12:56:06.408620 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:56:06.408862 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 16 12:56:06.409089 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 16 12:56:06.409338 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:56:06.409585 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 16 12:56:06.409812 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 16 12:56:06.410052 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:56:06.410301 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 16 12:56:06.410526 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 16 12:56:06.410770 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:56:06.411015 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 16 12:56:06.411260 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 16 12:56:06.411487 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:56:06.411715 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 16 12:56:06.411973 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 16 12:56:06.412200 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:56:06.412238 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 12:56:06.412257 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 12:56:06.412272 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 16 12:56:06.412287 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:56:06.412317 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 12:56:06.412338 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 
12:56:06.412353 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 12:56:06.412368 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 12:56:06.412383 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 12:56:06.412632 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 12:56:06.412879 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 12:56:06.413119 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T12:56:04 UTC (1765889764) Dec 16 12:56:06.413355 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 16 12:56:06.413378 kernel: intel_pstate: CPU model not supported Dec 16 12:56:06.413393 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:56:06.413407 kernel: Segment Routing with IPv6 Dec 16 12:56:06.413421 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:56:06.413436 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:56:06.413468 kernel: Key type dns_resolver registered Dec 16 12:56:06.413482 kernel: IPI shorthand broadcast: enabled Dec 16 12:56:06.413498 kernel: sched_clock: Marking stable (2257004548, 227151348)->(2614295384, -130139488) Dec 16 12:56:06.413512 kernel: registered taskstats version 1 Dec 16 12:56:06.413527 kernel: Loading compiled-in X.509 certificates Dec 16 12:56:06.413553 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8' Dec 16 12:56:06.413568 kernel: Demotion targets for Node 0: null Dec 16 12:56:06.413593 kernel: Key type .fscrypt registered Dec 16 12:56:06.413607 kernel: Key type fscrypt-provisioning registered Dec 16 12:56:06.413622 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 12:56:06.413637 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:56:06.413651 kernel: ima: No architecture policies found Dec 16 12:56:06.413665 kernel: clk: Disabling unused clocks Dec 16 12:56:06.413679 kernel: Freeing unused kernel image (initmem) memory: 15556K Dec 16 12:56:06.413705 kernel: Write protecting the kernel read-only data: 47104k Dec 16 12:56:06.413720 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 16 12:56:06.413734 kernel: Run /init as init process Dec 16 12:56:06.413749 kernel: with arguments: Dec 16 12:56:06.413764 kernel: /init Dec 16 12:56:06.413778 kernel: with environment: Dec 16 12:56:06.413792 kernel: HOME=/ Dec 16 12:56:06.413817 kernel: TERM=linux Dec 16 12:56:06.413846 kernel: ACPI: bus type USB registered Dec 16 12:56:06.413861 kernel: usbcore: registered new interface driver usbfs Dec 16 12:56:06.413875 kernel: usbcore: registered new interface driver hub Dec 16 12:56:06.413890 kernel: usbcore: registered new device driver usb Dec 16 12:56:06.414127 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 16 12:56:06.414388 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 16 12:56:06.414638 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 16 12:56:06.414884 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 16 12:56:06.415116 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 16 12:56:06.415366 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 16 12:56:06.415665 kernel: hub 1-0:1.0: USB hub found Dec 16 12:56:06.415928 kernel: hub 1-0:1.0: 4 ports detected Dec 16 12:56:06.416212 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 16 12:56:06.416500 kernel: hub 2-0:1.0: USB hub found Dec 16 12:56:06.416745 kernel: hub 2-0:1.0: 4 ports detected Dec 16 12:56:06.416768 kernel: SCSI subsystem initialized Dec 16 12:56:06.416783 kernel: libata version 3.00 loaded. Dec 16 12:56:06.417025 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 12:56:06.417066 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 12:56:06.417309 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 12:56:06.417536 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 12:56:06.417761 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 12:56:06.418032 kernel: scsi host0: ahci Dec 16 12:56:06.418328 kernel: scsi host1: ahci Dec 16 12:56:06.418575 kernel: scsi host2: ahci Dec 16 12:56:06.418817 kernel: scsi host3: ahci Dec 16 12:56:06.419073 kernel: scsi host4: ahci Dec 16 12:56:06.419340 kernel: scsi host5: ahci Dec 16 12:56:06.419364 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Dec 16 12:56:06.419396 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Dec 16 12:56:06.419412 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Dec 16 12:56:06.419427 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Dec 16 12:56:06.419441 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Dec 16 12:56:06.419456 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Dec 16 12:56:06.419726 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 12:56:06.419766 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 12:56:06.419781 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 12:56:06.419796 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 12:56:06.419811 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 12:56:06.419825 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 16 12:56:06.419852 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 12:56:06.419866 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 12:56:06.420132 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 16 12:56:06.420157 kernel: usbcore: registered new interface driver usbhid Dec 16 12:56:06.420402 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 16 12:56:06.420425 kernel: usbhid: USB HID core driver Dec 16 12:56:06.420440 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 12:56:06.420455 kernel: GPT:25804799 != 125829119 Dec 16 12:56:06.420486 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 12:56:06.420501 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 16 12:56:06.420515 kernel: GPT:25804799 != 125829119 Dec 16 12:56:06.420816 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 16 12:56:06.420853 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 12:56:06.420868 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:56:06.420897 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 12:56:06.420913 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:56:06.420928 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:56:06.420943 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 12:56:06.420957 kernel: raid6: sse2x4 gen() 13571 MB/s Dec 16 12:56:06.420982 kernel: raid6: sse2x2 gen() 9477 MB/s Dec 16 12:56:06.420998 kernel: raid6: sse2x1 gen() 9442 MB/s Dec 16 12:56:06.421023 kernel: raid6: using algorithm sse2x4 gen() 13571 MB/s Dec 16 12:56:06.421038 kernel: raid6: .... xor() 7660 MB/s, rmw enabled Dec 16 12:56:06.421053 kernel: raid6: using ssse3x2 recovery algorithm Dec 16 12:56:06.421068 kernel: xor: automatically using best checksumming function avx Dec 16 12:56:06.421083 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:56:06.421098 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (193) Dec 16 12:56:06.421112 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7 Dec 16 12:56:06.421138 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:56:06.421153 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:56:06.421169 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:56:06.421183 kernel: loop: module loaded Dec 16 12:56:06.421198 kernel: loop0: detected capacity change from 0 to 100528 Dec 16 12:56:06.421213 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:56:06.421247 systemd[1]: Successfully made /usr/ read-only. Dec 16 12:56:06.421281 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:56:06.421298 systemd[1]: Detected virtualization kvm. Dec 16 12:56:06.421314 systemd[1]: Detected architecture x86-64. Dec 16 12:56:06.421329 systemd[1]: Running in initrd. Dec 16 12:56:06.421344 systemd[1]: No hostname configured, using default hostname. Dec 16 12:56:06.421360 systemd[1]: Hostname set to . Dec 16 12:56:06.421387 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:56:06.421404 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:56:06.421419 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:56:06.421435 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:56:06.421451 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:56:06.421468 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:56:06.421494 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:56:06.421513 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:56:06.421529 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:56:06.421544 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 16 12:56:06.421560 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:56:06.421576 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:56:06.421603 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:56:06.421619 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:56:06.421635 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:56:06.421651 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:56:06.421666 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:56:06.421682 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:56:06.421697 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:56:06.421724 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:56:06.421740 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:56:06.421756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:56:06.421772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:56:06.421787 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:56:06.421803 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:56:06.421819 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:56:06.421860 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:56:06.421877 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:56:06.421893 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:56:06.421909 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:56:06.421925 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:56:06.421941 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:56:06.421969 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:56:06.421987 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:56:06.422003 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:56:06.422019 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:56:06.422051 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:56:06.422068 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:56:06.422150 systemd-journald[332]: Collecting audit messages is enabled. Dec 16 12:56:06.422198 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:56:06.422216 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:56:06.422254 kernel: Bridge firewalling registered Dec 16 12:56:06.422271 systemd-journald[332]: Journal started Dec 16 12:56:06.422300 systemd-journald[332]: Runtime Journal (/run/log/journal/b7a26b31e2ef49718c4dbc007a45a728) is 4.7M, max 37.7M, 33M free. 
Dec 16 12:56:06.384019 systemd-modules-load[334]: Inserted module 'br_netfilter' Dec 16 12:56:06.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.457450 kernel: audit: type=1130 audit(1765889766.449:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.457566 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:56:06.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.459340 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:56:06.465386 kernel: audit: type=1130 audit(1765889766.458:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.465427 kernel: audit: type=1130 audit(1765889766.464:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.470132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:56:06.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.477310 kernel: audit: type=1130 audit(1765889766.470:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.477446 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:56:06.482453 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:56:06.491506 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:56:06.496429 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:56:06.515157 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:56:06.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.522283 kernel: audit: type=1130 audit(1765889766.515:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.526390 systemd-tmpfiles[352]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:56:06.529425 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 16 12:56:06.526000 audit: BPF prog-id=6 op=LOAD Dec 16 12:56:06.534268 kernel: audit: type=1334 audit(1765889766.526:7): prog-id=6 op=LOAD Dec 16 12:56:06.539464 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:56:06.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.550253 kernel: audit: type=1130 audit(1765889766.541:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.560659 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:56:06.567693 kernel: audit: type=1130 audit(1765889766.561:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.562683 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:56:06.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.576408 kernel: audit: type=1130 audit(1765889766.569:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.580431 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:56:06.611996 dracut-cmdline[371]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 12:56:06.636146 systemd-resolved[360]: Positive Trust Anchors: Dec 16 12:56:06.643120 systemd-resolved[360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:56:06.643131 systemd-resolved[360]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:56:06.643182 systemd-resolved[360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:56:06.684339 systemd-resolved[360]: Defaulting to hostname 'linux'. 
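The dracut-cmdline hook above echoes the kernel command line it will act on, with dracut's own defaults (rd.driver.pre=btrfs, SYSTEMD_SULOGIN_FORCE=1) prepended to the arguments supplied by the bootloader. A rough sketch of splitting such a line into flags and key=value pairs, as initrd tooling does when it reads /proc/cmdline, follows; it is simplified and keeps only the last value when a key repeats, whereas real consumers may honour every occurrence (for example both console= entries).

    #!/usr/bin/env python3
    """Sketch: split a kernel command line (as in /proc/cmdline or the
    dracut-cmdline log line above) into flags and key=value arguments.
    Simplified: ignores exotic quoting, keeps the last value for repeated keys."""
    import shlex

    def parse_cmdline(cmdline: str) -> dict:
        args = {}
        for token in shlex.split(cmdline):
            key, sep, value = token.partition("=")
            args[key] = value if sep else True
        return args

    if __name__ == "__main__":
        sample = "rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro root=LABEL=ROOT flatcar.first_boot=detected"
        for k, v in parse_cmdline(sample).items():
            print(f"{k!r}: {v!r}")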
Dec 16 12:56:06.686071 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:56:06.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.688749 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:56:06.756285 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:56:06.776368 kernel: iscsi: registered transport (tcp) Dec 16 12:56:06.806967 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:56:06.807085 kernel: QLogic iSCSI HBA Driver Dec 16 12:56:06.844641 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:56:06.876879 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:56:06.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.880909 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:56:06.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:06.946926 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:56:06.951308 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:56:06.955400 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:56:07.001217 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:56:07.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.004000 audit: BPF prog-id=7 op=LOAD Dec 16 12:56:07.004000 audit: BPF prog-id=8 op=LOAD Dec 16 12:56:07.005483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:56:07.042532 systemd-udevd[611]: Using default interface naming scheme 'v257'. Dec 16 12:56:07.059419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:56:07.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.063411 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:56:07.102189 dracut-pre-trigger[674]: rd.md=0: removing MD RAID activation Dec 16 12:56:07.110661 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:56:07.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.113000 audit: BPF prog-id=9 op=LOAD Dec 16 12:56:07.116845 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:56:07.151389 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 16 12:56:07.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.156419 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:56:07.182619 systemd-networkd[722]: lo: Link UP Dec 16 12:56:07.182633 systemd-networkd[722]: lo: Gained carrier Dec 16 12:56:07.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.186418 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:56:07.187381 systemd[1]: Reached target network.target - Network. Dec 16 12:56:07.312562 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:56:07.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.316982 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:56:07.457761 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 12:56:07.498251 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 12:56:07.521333 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 12:56:07.538291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:56:07.540869 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:56:07.572249 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 12:56:07.619004 disk-uuid[775]: Primary Header is updated. Dec 16 12:56:07.619004 disk-uuid[775]: Secondary Entries is updated. Dec 16 12:56:07.619004 disk-uuid[775]: Secondary Header is updated. Dec 16 12:56:07.638167 kernel: AES CTR mode by8 optimization enabled Dec 16 12:56:07.638286 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 16 12:56:07.714347 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:56:07.714546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:56:07.728026 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 12:56:07.728067 kernel: audit: type=1131 audit(1765889767.715:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.716188 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:56:07.731496 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:56:07.731510 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:56:07.733286 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
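The disk-uuid entries above show the GPT being rewritten (primary header, secondary entries, secondary header); as a later warning in this log notes, the kernel keeps using the old partition table until it is told to re-read it with partprobe(8) or kpartx(8). Below is a minimal sketch of triggering that re-read via the BLKRRPART ioctl, which is roughly what partprobe does for a whole disk; the device path is illustrative and root is required.

    #!/usr/bin/env python3
    """Sketch: ask the kernel to re-read a disk's partition table, similar in
    spirit to partprobe(8). Uses the BLKRRPART ioctl on a whole-disk node
    (not a partition); requires root. The device path below is illustrative."""
    import fcntl
    import os

    BLKRRPART = 0x125F  # ioctl request number from <linux/fs.h>

    def reread_partition_table(device: str) -> None:
        fd = os.open(device, os.O_RDONLY)
        try:
            fcntl.ioctl(fd, BLKRRPART)
        finally:
            os.close(fd)

    if __name__ == "__main__":
        # Fails with EBUSY if partitions on the disk are still in use.
        reread_partition_table("/dev/vda")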
Dec 16 12:56:07.735130 systemd-networkd[722]: eth0: Link UP Dec 16 12:56:07.736470 systemd-networkd[722]: eth0: Gained carrier Dec 16 12:56:07.736491 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:56:07.760349 systemd-networkd[722]: eth0: DHCPv4 address 10.244.27.222/30, gateway 10.244.27.221 acquired from 10.244.27.221 Dec 16 12:56:07.826544 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:56:07.885971 kernel: audit: type=1130 audit(1765889767.873:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.876095 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:56:07.876994 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:56:07.877732 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:56:07.880160 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:56:07.897559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:56:07.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.906306 kernel: audit: type=1130 audit(1765889767.899:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.912255 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:56:07.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:07.919318 kernel: audit: type=1130 audit(1765889767.912:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:08.697073 disk-uuid[777]: Warning: The kernel is still using the old partition table. Dec 16 12:56:08.697073 disk-uuid[777]: The new table will be used at the next reboot or after you Dec 16 12:56:08.697073 disk-uuid[777]: run partprobe(8) or kpartx(8) Dec 16 12:56:08.697073 disk-uuid[777]: The operation has completed successfully. Dec 16 12:56:08.704470 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:56:08.731725 kernel: audit: type=1130 audit(1765889768.710:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:08.731772 kernel: audit: type=1131 audit(1765889768.710:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:08.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:08.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:08.704659 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:56:08.713557 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:56:08.766281 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (856) Dec 16 12:56:08.770721 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 12:56:08.770851 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:56:08.780199 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:56:08.780352 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:56:08.789277 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 12:56:08.790920 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:56:08.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:08.797472 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:56:08.799385 kernel: audit: type=1130 audit(1765889768.791:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:09.043141 ignition[875]: Ignition 2.24.0 Dec 16 12:56:09.043169 ignition[875]: Stage: fetch-offline Dec 16 12:56:09.043363 ignition[875]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:56:09.043388 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 12:56:09.046550 ignition[875]: parsed url from cmdline: "" Dec 16 12:56:09.047188 ignition[875]: no config URL provided Dec 16 12:56:09.047213 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:56:09.048048 ignition[875]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:56:09.048061 ignition[875]: failed to fetch config: resource requires networking Dec 16 12:56:09.049341 ignition[875]: Ignition finished successfully Dec 16 12:56:09.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:09.051911 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:56:09.060747 kernel: audit: type=1130 audit(1765889769.052:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:09.059757 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
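The fetch-offline stage above shows Ignition's local search order before it resorts to the network: base snippets in /usr/lib/ignition/base.d, platform snippets in /usr/lib/ignition/base.platform.d/openstack, and a user config at /usr/lib/ignition/user.ign. The sketch below models only that lookup order, with paths taken from the log; real Ignition also merges the configs it finds and validates them against its schema.

    #!/usr/bin/env python3
    """Sketch: reproduce the local config lookup order described by the
    fetch-offline log lines above. Only the search order is modeled; merging
    and schema validation are omitted."""
    import json
    from pathlib import Path

    PLATFORM = "openstack"
    SEARCH_DIRS = [
        Path("/usr/lib/ignition/base.d"),
        Path(f"/usr/lib/ignition/base.platform.d/{PLATFORM}"),
    ]
    USER_CONFIG = Path("/usr/lib/ignition/user.ign")

    def find_local_configs() -> list:
        found = []
        for directory in SEARCH_DIRS:
            if directory.is_dir():
                found.extend(sorted(directory.glob("*.ign")))
        if USER_CONFIG.is_file():
            found.append(USER_CONFIG)
        return found

    if __name__ == "__main__":
        configs = find_local_configs()
        if not configs:
            print("no local config; a networked provider (metadata service) is required")
        for path in configs:
            version = json.loads(path.read_text()).get("ignition", {}).get("version")
            print("would merge:", path, "spec version", version)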
Dec 16 12:56:09.090885 ignition[881]: Ignition 2.24.0 Dec 16 12:56:09.090913 ignition[881]: Stage: fetch Dec 16 12:56:09.091246 ignition[881]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:56:09.091272 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 12:56:09.091443 ignition[881]: parsed url from cmdline: "" Dec 16 12:56:09.091451 ignition[881]: no config URL provided Dec 16 12:56:09.091461 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:56:09.091497 ignition[881]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:56:09.091714 ignition[881]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 16 12:56:09.092186 ignition[881]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 16 12:56:09.092245 ignition[881]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 16 12:56:09.111006 ignition[881]: GET result: OK Dec 16 12:56:09.111380 ignition[881]: parsing config with SHA512: 7c250ef6756a69b9115b023fb47687c89d9873cbe0f62437fb08251f3f80d1c6f34d21dea869fe7de18706abf61a36710a973872c04c6f5cc3329acc0a9122db Dec 16 12:56:09.123069 unknown[881]: fetched base config from "system" Dec 16 12:56:09.123089 unknown[881]: fetched base config from "system" Dec 16 12:56:09.123697 ignition[881]: fetch: fetch complete Dec 16 12:56:09.123099 unknown[881]: fetched user config from "openstack" Dec 16 12:56:09.123710 ignition[881]: fetch: fetch passed Dec 16 12:56:09.123884 ignition[881]: Ignition finished successfully Dec 16 12:56:09.127758 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 12:56:09.135681 kernel: audit: type=1130 audit(1765889769.128:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:09.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:09.131410 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:56:09.176427 ignition[887]: Ignition 2.24.0 Dec 16 12:56:09.176453 ignition[887]: Stage: kargs Dec 16 12:56:09.176708 ignition[887]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:56:09.179807 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:56:09.187511 kernel: audit: type=1130 audit(1765889769.180:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:09.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:09.176726 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 12:56:09.184424 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
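In the fetch stage above, Ignition pulls user data from the OpenStack metadata endpoint and logs the SHA512 of the payload it parsed. The standard-library sketch below performs the same round trip using the URL shown in the log; it makes a single attempt, whereas Ignition retries with backoff and also keeps waiting for a config-2 config drive.

    #!/usr/bin/env python3
    """Sketch: fetch OpenStack user data as the Ignition 'fetch' stage logs it,
    then report the payload's SHA512 like the 'parsing config with SHA512' line
    above. Single attempt, no config-drive fallback."""
    import hashlib
    import urllib.request

    USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data(url: str = USER_DATA_URL, timeout: float = 10.0) -> bytes:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.read()

    if __name__ == "__main__":
        payload = fetch_user_data()
        print("GET result: OK,", len(payload), "bytes")
        print("sha512:", hashlib.sha512(payload).hexdigest())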
Dec 16 12:56:09.178114 ignition[887]: kargs: kargs passed Dec 16 12:56:09.178190 ignition[887]: Ignition finished successfully Dec 16 12:56:09.218678 ignition[893]: Ignition 2.24.0 Dec 16 12:56:09.218705 ignition[893]: Stage: disks Dec 16 12:56:09.219004 ignition[893]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:56:09.219023 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 12:56:09.220615 ignition[893]: disks: disks passed Dec 16 12:56:09.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:09.222836 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:56:09.220704 ignition[893]: Ignition finished successfully Dec 16 12:56:09.225198 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:56:09.228057 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:56:09.229490 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:56:09.232801 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:56:09.233522 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:56:09.236478 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:56:09.293726 systemd-fsck[901]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 16 12:56:09.307471 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:56:09.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:09.313383 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:56:09.473251 kernel: EXT4-fs (vda9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none. Dec 16 12:56:09.474454 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:56:09.475785 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:56:09.478579 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:56:09.481067 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:56:09.483110 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 12:56:09.487496 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 16 12:56:09.489689 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:56:09.489755 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:56:09.503094 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:56:09.506511 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 12:56:09.518724 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (909) Dec 16 12:56:09.527208 systemd-networkd[722]: eth0: Gained IPv6LL Dec 16 12:56:09.546437 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 12:56:09.546485 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:56:09.546509 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:56:09.546571 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:56:09.545814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:56:09.625299 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 12:56:09.796541 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:56:09.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:09.799826 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:56:09.801534 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:56:09.826394 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:56:09.829855 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 12:56:09.846340 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:56:09.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:09.864079 ignition[1012]: INFO : Ignition 2.24.0 Dec 16 12:56:09.864079 ignition[1012]: INFO : Stage: mount Dec 16 12:56:09.867027 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:56:09.867027 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 12:56:09.867027 ignition[1012]: INFO : mount: mount passed Dec 16 12:56:09.867027 ignition[1012]: INFO : Ignition finished successfully Dec 16 12:56:09.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:09.869411 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:56:10.662307 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 12:56:11.036966 systemd-networkd[722]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:6f7:24:19ff:fef4:1bde/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:6f7:24:19ff:fef4:1bde/64 assigned by NDisc. Dec 16 12:56:11.036983 systemd-networkd[722]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Dec 16 12:56:12.671260 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 12:56:16.685268 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 12:56:16.691715 coreos-metadata[911]: Dec 16 12:56:16.691 WARN failed to locate config-drive, using the metadata service API instead Dec 16 12:56:16.718177 coreos-metadata[911]: Dec 16 12:56:16.718 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 12:56:16.738822 coreos-metadata[911]: Dec 16 12:56:16.738 INFO Fetch successful Dec 16 12:56:16.738822 coreos-metadata[911]: Dec 16 12:56:16.738 INFO wrote hostname srv-gtzk5.gb1.brightbox.com to /sysroot/etc/hostname Dec 16 12:56:16.741512 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 16 12:56:16.751343 kernel: kauditd_printk_skb: 5 callbacks suppressed Dec 16 12:56:16.751379 kernel: audit: type=1130 audit(1765889776.743:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:16.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:16.741748 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 16 12:56:16.758336 kernel: audit: type=1131 audit(1765889776.743:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:16.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:16.747344 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:56:16.773545 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:56:16.803242 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1028) Dec 16 12:56:16.803309 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 12:56:16.804261 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:56:16.810400 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:56:16.810446 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:56:16.814384 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
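With no config drive present, the hostname agent above falls back to the EC2-style metadata path, fetches the hostname, and writes it into /sysroot/etc/hostname. A minimal sketch of that fallback follows; the URL and target path are taken from the log, and the config-drive branch the real agent tries first is omitted.

    #!/usr/bin/env python3
    """Sketch: fetch the instance hostname from the metadata service and write
    it out, mirroring the flatcar-openstack-hostname log lines above. The real
    agent first looks for a config-2 config drive; that branch is omitted."""
    import urllib.request
    from pathlib import Path

    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def fetch_hostname(url: str = HOSTNAME_URL, timeout: float = 10.0) -> str:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.read().decode().strip()

    def write_hostname(hostname: str, target: Path = Path("/sysroot/etc/hostname")) -> None:
        target.write_text(hostname + "\n")

    if __name__ == "__main__":
        name = fetch_hostname()
        write_hostname(name)
        print("wrote hostname", name, "to /sysroot/etc/hostname")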
Dec 16 12:56:16.852456 ignition[1046]: INFO : Ignition 2.24.0 Dec 16 12:56:16.852456 ignition[1046]: INFO : Stage: files Dec 16 12:56:16.854321 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:56:16.854321 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 12:56:16.854321 ignition[1046]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:56:16.857035 ignition[1046]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:56:16.857035 ignition[1046]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:56:16.861318 ignition[1046]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:56:16.862526 ignition[1046]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:56:16.863956 unknown[1046]: wrote ssh authorized keys file for user: core Dec 16 12:56:16.865020 ignition[1046]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:56:16.866600 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 12:56:16.867893 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 12:56:17.067145 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:56:17.391174 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 12:56:17.392689 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:56:17.393800 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:56:17.393800 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:56:17.393800 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:56:17.393800 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:56:17.393800 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:56:17.393800 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:56:17.393800 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:56:17.402047 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:56:17.402047 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:56:17.402047 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 12:56:17.402047 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 12:56:17.402047 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 12:56:17.402047 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 16 12:56:17.981402 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:56:20.294901 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 12:56:20.294901 ignition[1046]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:56:20.298060 ignition[1046]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:56:20.300957 ignition[1046]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:56:20.300957 ignition[1046]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:56:20.300957 ignition[1046]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:56:20.304543 ignition[1046]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:56:20.304543 ignition[1046]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:56:20.304543 ignition[1046]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:56:20.304543 ignition[1046]: INFO : files: files passed Dec 16 12:56:20.304543 ignition[1046]: INFO : Ignition finished successfully Dec 16 12:56:20.317717 kernel: audit: type=1130 audit(1765889780.306:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.303485 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:56:20.308775 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:56:20.321058 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:56:20.327568 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:56:20.327796 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:56:20.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:20.336498 kernel: audit: type=1130 audit(1765889780.328:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.336555 kernel: audit: type=1131 audit(1765889780.328:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.357027 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:56:20.357027 initrd-setup-root-after-ignition[1077]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:56:20.361407 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:56:20.363300 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:56:20.370156 kernel: audit: type=1130 audit(1765889780.363:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.364937 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:56:20.372249 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:56:20.444499 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:56:20.444748 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:56:20.457054 kernel: audit: type=1130 audit(1765889780.445:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.457104 kernel: audit: type=1131 audit(1765889780.446:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.446983 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:56:20.457813 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:56:20.459669 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:56:20.462454 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:56:20.496123 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
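The Ignition files stage logged above created the core user and its SSH keys, downloaded the helm tarball and the kubernetes sysext image, wrote several manifests plus /etc/flatcar/update.conf, created the /etc/extensions/kubernetes.raw symlink, and enabled prepare-helm.service. The sketch below shows roughly what the user-supplied Ignition config driving those operations could look like, expressed as a Python dict; field names follow the Ignition v3 spec, but the spec version, SSH key, and unit contents are placeholders, not values recovered from the log.

    #!/usr/bin/env python3
    """Sketch: an Ignition-style config, as a Python dict, that would drive
    roughly the operations the files stage logs above. Placeholder values are
    marked; this is not the config that was actually fetched."""
    import json

    config = {
        "ignition": {"version": "3.4.0"},  # placeholder spec version
        "passwd": {
            "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}]
        },
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
                },
                {
                    "path": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
                    "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw"},
                },
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
                }
            ],
        },
        "systemd": {
            "units": [{"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n# placeholder unit"}]
        },
    }

    if __name__ == "__main__":
        print(json.dumps(config, indent=2))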
Dec 16 12:56:20.508677 kernel: audit: type=1130 audit(1765889780.502:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.506517 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:56:20.544203 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:56:20.545766 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:56:20.546681 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:56:20.548440 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:56:20.550093 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:56:20.557163 kernel: audit: type=1131 audit(1765889780.551:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.550400 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:56:20.557029 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:56:20.558008 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:56:20.559643 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:56:20.560987 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:56:20.562414 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:56:20.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.564138 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:56:20.565678 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:56:20.566450 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:56:20.567430 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:56:20.568307 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:56:20.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.569153 systemd[1]: Stopped target swap.target - Swaps. 
Dec 16 12:56:20.569907 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:56:20.570204 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:56:20.571967 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:56:20.573014 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:56:20.574781 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:56:20.575011 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:56:20.576414 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:56:20.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.576727 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:56:20.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.578392 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:56:20.578673 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:56:20.579791 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:56:20.580047 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:56:20.583541 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:56:20.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.589548 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:56:20.591430 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:56:20.592023 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:56:20.594650 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:56:20.595147 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:56:20.596671 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:56:20.598272 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:56:20.608391 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:56:20.609336 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:56:20.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:20.629369 ignition[1101]: INFO : Ignition 2.24.0 Dec 16 12:56:20.629369 ignition[1101]: INFO : Stage: umount Dec 16 12:56:20.629369 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:56:20.629369 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 12:56:20.629369 ignition[1101]: INFO : umount: umount passed Dec 16 12:56:20.629369 ignition[1101]: INFO : Ignition finished successfully Dec 16 12:56:20.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.627449 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:56:20.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.627670 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:56:20.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.629008 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:56:20.629189 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:56:20.633984 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:56:20.634064 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:56:20.636353 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 12:56:20.636445 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 12:56:20.637183 systemd[1]: Stopped target network.target - Network. Dec 16 12:56:20.637797 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:56:20.637881 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:56:20.639381 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:56:20.641202 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:56:20.645302 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:56:20.646119 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:56:20.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.647428 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:56:20.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.648913 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:56:20.648987 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:56:20.650418 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:56:20.650495 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 16 12:56:20.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.651762 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 12:56:20.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.651815 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:56:20.653331 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:56:20.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.653425 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:56:20.655119 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:56:20.655189 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:56:20.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.656740 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:56:20.658716 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:56:20.664867 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:56:20.682000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:56:20.682000 audit: BPF prog-id=6 op=UNLOAD Dec 16 12:56:20.666140 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:56:20.666370 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:56:20.668682 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:56:20.668820 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:56:20.674158 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:56:20.674371 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:56:20.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.677815 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:56:20.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.677992 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:56:20.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.682664 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:56:20.683799 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:56:20.683899 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:56:20.687361 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Dec 16 12:56:20.688264 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:56:20.688356 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:56:20.696701 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:56:20.696795 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:56:20.698237 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:56:20.698327 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:56:20.699823 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:56:20.716402 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:56:20.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.718332 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:56:20.720147 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:56:20.722027 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:56:20.724090 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:56:20.724171 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:56:20.725870 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:56:20.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.725961 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:56:20.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.727935 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:56:20.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.728014 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:56:20.731820 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:56:20.731916 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:56:20.734814 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:56:20.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:20.738608 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:56:20.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.738717 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:56:20.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.740478 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:56:20.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.740559 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:56:20.741491 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 12:56:20.741566 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:56:20.742460 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:56:20.742531 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:56:20.744130 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:56:20.744209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:56:20.746881 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:56:20.747061 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:56:20.764827 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:56:20.765022 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:56:20.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.767446 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:56:20.770131 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:56:20.793663 systemd[1]: Switching root. Dec 16 12:56:20.830356 systemd-journald[332]: Journal stopped Dec 16 12:56:22.728626 systemd-journald[332]: Received SIGTERM from PID 1 (systemd). 
Dec 16 12:56:22.728759 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:56:22.728823 kernel: SELinux: policy capability open_perms=1 Dec 16 12:56:22.728847 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:56:22.728874 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:56:22.728903 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:56:22.728925 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:56:22.728945 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:56:22.728967 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:56:22.728998 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:56:22.729028 systemd[1]: Successfully loaded SELinux policy in 85.899ms. Dec 16 12:56:22.729074 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.168ms. Dec 16 12:56:22.729101 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:56:22.729123 systemd[1]: Detected virtualization kvm. Dec 16 12:56:22.729166 systemd[1]: Detected architecture x86-64. Dec 16 12:56:22.729191 systemd[1]: Detected first boot. Dec 16 12:56:22.729252 systemd[1]: Hostname set to . Dec 16 12:56:22.729279 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:56:22.729346 zram_generator::config[1147]: No configuration found. Dec 16 12:56:22.729423 kernel: Guest personality initialized and is inactive Dec 16 12:56:22.729449 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 12:56:22.729471 kernel: Initialized host personality Dec 16 12:56:22.729500 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:56:22.729537 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:56:22.729573 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 16 12:56:22.729605 kernel: audit: type=1334 audit(1765889782.203:91): prog-id=12 op=LOAD Dec 16 12:56:22.729627 kernel: audit: type=1334 audit(1765889782.204:92): prog-id=3 op=UNLOAD Dec 16 12:56:22.729649 kernel: audit: type=1334 audit(1765889782.204:93): prog-id=13 op=LOAD Dec 16 12:56:22.729668 kernel: audit: type=1334 audit(1765889782.204:94): prog-id=14 op=LOAD Dec 16 12:56:22.729695 kernel: audit: type=1334 audit(1765889782.204:95): prog-id=4 op=UNLOAD Dec 16 12:56:22.729732 kernel: audit: type=1334 audit(1765889782.204:96): prog-id=5 op=UNLOAD Dec 16 12:56:22.729756 kernel: audit: type=1131 audit(1765889782.207:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.729778 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:56:22.729801 kernel: audit: type=1334 audit(1765889782.219:98): prog-id=12 op=UNLOAD Dec 16 12:56:22.729821 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:56:22.729843 kernel: audit: type=1130 audit(1765889782.223:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:22.729878 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:56:22.729905 kernel: audit: type=1131 audit(1765889782.223:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.729935 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:56:22.729960 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:56:22.729984 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:56:22.730032 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:56:22.730057 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:56:22.730080 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:56:22.730104 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:56:22.730126 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:56:22.730149 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:56:22.730183 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:56:22.730208 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:56:22.730320 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:56:22.730350 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:56:22.730373 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:56:22.730397 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 12:56:22.730418 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:56:22.730455 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:56:22.730485 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:56:22.730516 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:56:22.730539 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:56:22.730576 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:56:22.730608 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:56:22.730683 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:56:22.730712 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:56:22.730751 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:56:22.730775 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:56:22.730797 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:56:22.730826 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:56:22.730859 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:56:22.730885 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. 
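
The systemd 257.9 feature string logged above lists compile-time options with a leading + or -; note that -BPF_FRAMEWORK there is consistent with the later systemd-nsresourced message about BPF support being disabled at compile time. Purely as an illustration (not part of the boot log), the flags can be split out like this in Python:

    # The systemd feature string logged above, split into enabled/disabled options.
    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 "
                "-PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD "
                "-BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")
    enabled = [f[1:] for f in features.split() if f.startswith("+")]
    disabled = [f[1:] for f in features.split() if f.startswith("-")]
    print(f"{len(enabled)} enabled, {len(disabled)} disabled; SELinux on: {'SELINUX' in enabled}")
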
Dec 16 12:56:22.730907 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:56:22.730950 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:56:22.730983 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:56:22.731006 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:56:22.731083 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:56:22.731119 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:56:22.731142 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:56:22.731164 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:56:22.731202 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:56:22.731248 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:56:22.731275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:56:22.731297 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:56:22.731327 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:56:22.731350 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:56:22.731374 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:56:22.731409 systemd[1]: Reached target machines.target - Containers. Dec 16 12:56:22.731433 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:56:22.731455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:56:22.731478 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:56:22.731500 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:56:22.731529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:56:22.731604 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:56:22.731638 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:56:22.731661 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:56:22.731691 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:56:22.731715 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:56:22.731749 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:56:22.731773 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:56:22.731815 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:56:22.731839 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:56:22.731870 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:56:22.731894 systemd[1]: Starting systemd-journald.service - Journal Service... 
Dec 16 12:56:22.731932 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:56:22.731957 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:56:22.731979 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:56:22.732001 kernel: fuse: init (API version 7.41) Dec 16 12:56:22.732058 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:56:22.732086 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:56:22.732110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:56:22.732155 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:56:22.732187 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:56:22.732211 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:56:22.742090 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:56:22.742153 kernel: ACPI: bus type drm_connector registered Dec 16 12:56:22.742180 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:56:22.742204 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:56:22.742245 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:56:22.742278 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:56:22.742302 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:56:22.742345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:56:22.742369 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:56:22.742407 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:56:22.742433 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:56:22.742463 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:56:22.742486 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:56:22.742508 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:56:22.742531 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:56:22.742652 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:56:22.742680 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:56:22.742703 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:56:22.742736 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:56:22.742761 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:56:22.742783 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:56:22.742806 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:56:22.742850 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 12:56:22.742885 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:56:22.742908 systemd[1]: Reached target local-fs.target - Local File Systems. 
Dec 16 12:56:22.742988 systemd-journald[1232]: Collecting audit messages is enabled. Dec 16 12:56:22.743040 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:56:22.743071 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:56:22.743116 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:56:22.743142 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:56:22.743167 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:56:22.743192 systemd-journald[1232]: Journal started Dec 16 12:56:22.752407 systemd-journald[1232]: Runtime Journal (/run/log/journal/b7a26b31e2ef49718c4dbc007a45a728) is 4.7M, max 37.7M, 33M free. Dec 16 12:56:22.752581 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:56:22.752646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:56:22.328000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 12:56:22.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.500000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:56:22.500000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:56:22.501000 audit: BPF prog-id=15 op=LOAD Dec 16 12:56:22.504000 audit: BPF prog-id=16 op=LOAD Dec 16 12:56:22.504000 audit: BPF prog-id=17 op=LOAD Dec 16 12:56:22.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:22.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:22.716000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:56:22.716000 audit[1232]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd219adc50 a2=4000 a3=0 items=0 ppid=1 pid=1232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:22.716000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:56:22.759890 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:56:22.178400 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:56:22.205850 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 12:56:22.207683 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:56:22.765893 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:56:22.778647 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:56:22.783260 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:56:22.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.789876 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:56:22.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.821351 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:56:22.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.829897 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:56:22.843257 kernel: loop1: detected capacity change from 0 to 229808 Dec 16 12:56:22.841893 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:56:22.843821 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Dec 16 12:56:22.843849 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Dec 16 12:56:22.848216 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:56:22.876140 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:56:22.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.884859 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:56:22.893476 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
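
The SERVICE_START and SERVICE_STOP records interleaved above are Linux audit events in key=value form, with the unit name carried inside the quoted msg field. The following Python sketch is only an illustration, not something run during boot; it pulls the unit and result out of one such record copied from the log:

    import re

    # One audit record as it appears in the log above (timestamp prefix dropped).
    record = ("audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 "
              "subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald "
              "comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" "
              "hostname=? addr=? terminal=? res=success'")

    def parse_audit(line):
        # Separate the record type, the outer fields, and the quoted msg='...' payload.
        m = re.search(r"audit\[\d+\]: (\w+) (.*?) msg='(.*)'", line)
        rtype, outer, inner = m.groups()
        fields = dict(kv.split("=", 1) for kv in outer.split())
        fields.update(kv.split("=", 1) for kv in inner.split() if "=" in kv)
        return rtype, fields

    rtype, fields = parse_audit(record)
    print(rtype, fields["unit"], fields["res"])  # SERVICE_START systemd-journald success
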
Dec 16 12:56:22.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.899864 systemd-journald[1232]: Time spent on flushing to /var/log/journal/b7a26b31e2ef49718c4dbc007a45a728 is 89.672ms for 1313 entries. Dec 16 12:56:22.899864 systemd-journald[1232]: System Journal (/var/log/journal/b7a26b31e2ef49718c4dbc007a45a728) is 8M, max 588.1M, 580.1M free. Dec 16 12:56:23.006705 systemd-journald[1232]: Received client request to flush runtime journal. Dec 16 12:56:23.006784 kernel: loop2: detected capacity change from 0 to 50784 Dec 16 12:56:23.006828 kernel: loop3: detected capacity change from 0 to 8 Dec 16 12:56:22.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:22.916452 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:56:23.009243 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:56:23.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:23.022276 kernel: loop4: detected capacity change from 0 to 111560 Dec 16 12:56:23.023971 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:56:23.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:23.028000 audit: BPF prog-id=18 op=LOAD Dec 16 12:56:23.028000 audit: BPF prog-id=19 op=LOAD Dec 16 12:56:23.028000 audit: BPF prog-id=20 op=LOAD Dec 16 12:56:23.030531 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 12:56:23.033000 audit: BPF prog-id=21 op=LOAD Dec 16 12:56:23.035532 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:56:23.042504 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:56:23.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:23.044773 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:56:23.065000 audit: BPF prog-id=22 op=LOAD Dec 16 12:56:23.067000 audit: BPF prog-id=23 op=LOAD Dec 16 12:56:23.067000 audit: BPF prog-id=24 op=LOAD Dec 16 12:56:23.070592 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:56:23.075000 audit: BPF prog-id=25 op=LOAD Dec 16 12:56:23.075000 audit: BPF prog-id=26 op=LOAD Dec 16 12:56:23.075000 audit: BPF prog-id=27 op=LOAD Dec 16 12:56:23.080243 kernel: loop5: detected capacity change from 0 to 229808 Dec 16 12:56:23.078560 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:56:23.088075 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. 
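
For scale, the flush statistics just reported (89.672 ms for 1313 entries, with the runtime journal capped at 37.7M and the persistent system journal at 588.1M) work out to roughly 68 µs per entry; the arithmetic below uses only the logged numbers:

    # Plain arithmetic on the journald flush figures reported above.
    flush_ms, entries = 89.672, 1313
    print(f"~{flush_ms * 1000 / entries:.1f} us per flushed entry")  # ~68.3 us
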
Dec 16 12:56:23.090274 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Dec 16 12:56:23.098433 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:56:23.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:23.112301 kernel: loop6: detected capacity change from 0 to 50784 Dec 16 12:56:23.133256 kernel: loop7: detected capacity change from 0 to 8 Dec 16 12:56:23.145286 kernel: loop1: detected capacity change from 0 to 111560 Dec 16 12:56:23.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:23.165165 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:56:23.168516 (sd-merge)[1306]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Dec 16 12:56:23.182081 (sd-merge)[1306]: Merged extensions into '/usr'. Dec 16 12:56:23.198495 systemd[1]: Reload requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:56:23.198531 systemd[1]: Reloading... Dec 16 12:56:23.199751 systemd-nsresourced[1307]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:56:23.340288 zram_generator::config[1346]: No configuration found. Dec 16 12:56:23.444080 systemd-oomd[1299]: No swap; memory pressure usage will be degraded Dec 16 12:56:23.481637 systemd-resolved[1300]: Positive Trust Anchors: Dec 16 12:56:23.481661 systemd-resolved[1300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:56:23.481669 systemd-resolved[1300]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:56:23.481715 systemd-resolved[1300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:56:23.509145 systemd-resolved[1300]: Using system hostname 'srv-gtzk5.gb1.brightbox.com'. Dec 16 12:56:23.721835 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:56:23.722021 systemd[1]: Reloading finished in 522 ms. Dec 16 12:56:23.744028 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:56:23.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:23.753631 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:56:23.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:23.755181 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:56:23.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:23.756808 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:56:23.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:23.763563 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:56:23.767604 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:56:23.776395 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:56:23.791070 systemd[1]: Starting ensure-sysext.service... Dec 16 12:56:23.800435 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:56:23.812000 audit: BPF prog-id=28 op=LOAD Dec 16 12:56:23.813000 audit: BPF prog-id=15 op=UNLOAD Dec 16 12:56:23.813000 audit: BPF prog-id=29 op=LOAD Dec 16 12:56:23.814000 audit: BPF prog-id=30 op=LOAD Dec 16 12:56:23.814000 audit: BPF prog-id=16 op=UNLOAD Dec 16 12:56:23.814000 audit: BPF prog-id=17 op=UNLOAD Dec 16 12:56:23.814000 audit: BPF prog-id=31 op=LOAD Dec 16 12:56:23.814000 audit: BPF prog-id=25 op=UNLOAD Dec 16 12:56:23.816000 audit: BPF prog-id=32 op=LOAD Dec 16 12:56:23.816000 audit: BPF prog-id=33 op=LOAD Dec 16 12:56:23.816000 audit: BPF prog-id=26 op=UNLOAD Dec 16 12:56:23.816000 audit: BPF prog-id=27 op=UNLOAD Dec 16 12:56:23.819000 audit: BPF prog-id=34 op=LOAD Dec 16 12:56:23.826000 audit: BPF prog-id=18 op=UNLOAD Dec 16 12:56:23.826000 audit: BPF prog-id=35 op=LOAD Dec 16 12:56:23.826000 audit: BPF prog-id=36 op=LOAD Dec 16 12:56:23.826000 audit: BPF prog-id=19 op=UNLOAD Dec 16 12:56:23.826000 audit: BPF prog-id=20 op=UNLOAD Dec 16 12:56:23.829000 audit: BPF prog-id=37 op=LOAD Dec 16 12:56:23.829000 audit: BPF prog-id=21 op=UNLOAD Dec 16 12:56:23.830000 audit: BPF prog-id=38 op=LOAD Dec 16 12:56:23.830000 audit: BPF prog-id=22 op=UNLOAD Dec 16 12:56:23.830000 audit: BPF prog-id=39 op=LOAD Dec 16 12:56:23.830000 audit: BPF prog-id=40 op=LOAD Dec 16 12:56:23.830000 audit: BPF prog-id=23 op=UNLOAD Dec 16 12:56:23.830000 audit: BPF prog-id=24 op=UNLOAD Dec 16 12:56:23.835422 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:56:23.837958 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:56:23.852382 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:56:23.852446 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:56:23.852990 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:56:23.858400 systemd-tmpfiles[1408]: ACLs are not supported, ignoring. Dec 16 12:56:23.858515 systemd-tmpfiles[1408]: ACLs are not supported, ignoring. 
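
The "Duplicate line for path" warnings above come from overlapping entries in different tmpfiles.d(5) fragments (nfs-utils.conf and provision.conf in this boot). Purely as a hypothetical illustration of what the warning means, duplicate path declarations across fragments could be spotted with a sketch like this (column 2 of each tmpfiles.d line is the path):

    import glob

    # Hypothetical sketch: report paths declared by more than one tmpfiles.d fragment.
    seen = {}
    for frag in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
        with open(frag) as fh:
            for lineno, line in enumerate(fh, 1):
                fields = line.split()
                if not fields or fields[0].startswith("#") or len(fields) < 2:
                    continue
                path = fields[1]  # tmpfiles.d format: Type Path Mode User Group Age Argument
                if path in seen:
                    print(f"{frag}:{lineno}: duplicate line for path {path} (first in {seen[path]})")
                else:
                    seen[path] = frag
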
Dec 16 12:56:23.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:23.867463 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:56:23.869568 systemd[1]: Reload requested from client PID 1407 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:56:23.869619 systemd[1]: Reloading... Dec 16 12:56:23.872651 systemd-tmpfiles[1408]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:56:23.872833 systemd-tmpfiles[1408]: Skipping /boot Dec 16 12:56:23.889098 systemd-tmpfiles[1408]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:56:23.889120 systemd-tmpfiles[1408]: Skipping /boot Dec 16 12:56:23.978278 zram_generator::config[1448]: No configuration found. Dec 16 12:56:24.247547 systemd[1]: Reloading finished in 377 ms. Dec 16 12:56:24.262000 audit: BPF prog-id=41 op=LOAD Dec 16 12:56:24.262000 audit: BPF prog-id=34 op=UNLOAD Dec 16 12:56:24.263000 audit: BPF prog-id=42 op=LOAD Dec 16 12:56:24.263000 audit: BPF prog-id=43 op=LOAD Dec 16 12:56:24.263000 audit: BPF prog-id=35 op=UNLOAD Dec 16 12:56:24.263000 audit: BPF prog-id=36 op=UNLOAD Dec 16 12:56:24.265000 audit: BPF prog-id=44 op=LOAD Dec 16 12:56:24.265000 audit: BPF prog-id=37 op=UNLOAD Dec 16 12:56:24.266000 audit: BPF prog-id=45 op=LOAD Dec 16 12:56:24.266000 audit: BPF prog-id=28 op=UNLOAD Dec 16 12:56:24.266000 audit: BPF prog-id=46 op=LOAD Dec 16 12:56:24.266000 audit: BPF prog-id=47 op=LOAD Dec 16 12:56:24.266000 audit: BPF prog-id=29 op=UNLOAD Dec 16 12:56:24.266000 audit: BPF prog-id=30 op=UNLOAD Dec 16 12:56:24.268000 audit: BPF prog-id=48 op=LOAD Dec 16 12:56:24.268000 audit: BPF prog-id=38 op=UNLOAD Dec 16 12:56:24.268000 audit: BPF prog-id=49 op=LOAD Dec 16 12:56:24.268000 audit: BPF prog-id=50 op=LOAD Dec 16 12:56:24.268000 audit: BPF prog-id=39 op=UNLOAD Dec 16 12:56:24.268000 audit: BPF prog-id=40 op=UNLOAD Dec 16 12:56:24.269000 audit: BPF prog-id=51 op=LOAD Dec 16 12:56:24.269000 audit: BPF prog-id=31 op=UNLOAD Dec 16 12:56:24.269000 audit: BPF prog-id=52 op=LOAD Dec 16 12:56:24.269000 audit: BPF prog-id=53 op=LOAD Dec 16 12:56:24.269000 audit: BPF prog-id=32 op=UNLOAD Dec 16 12:56:24.269000 audit: BPF prog-id=33 op=UNLOAD Dec 16 12:56:24.282587 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:56:24.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:24.298764 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:56:24.304636 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:56:24.311705 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:56:24.316497 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:56:24.318000 audit: BPF prog-id=8 op=UNLOAD Dec 16 12:56:24.318000 audit: BPF prog-id=7 op=UNLOAD Dec 16 12:56:24.319000 audit: BPF prog-id=54 op=LOAD Dec 16 12:56:24.319000 audit: BPF prog-id=55 op=LOAD Dec 16 12:56:24.321749 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 16 12:56:24.332867 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:56:24.338886 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:56:24.339202 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:56:24.342457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:56:24.360350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:56:24.377021 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:56:24.378542 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:56:24.378857 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:56:24.379019 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:56:24.379175 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:56:24.389842 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:56:24.390125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:56:24.390431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:56:24.390705 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:56:24.390845 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:56:24.390983 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:56:24.408035 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:56:24.408410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:56:24.422289 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:56:24.424583 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:56:24.424850 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:56:24.425002 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 16 12:56:24.425188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:56:24.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:24.431544 systemd[1]: Finished ensure-sysext.service. Dec 16 12:56:24.442000 audit: BPF prog-id=56 op=LOAD Dec 16 12:56:24.445035 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 12:56:24.450000 audit[1506]: SYSTEM_BOOT pid=1506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:56:24.458606 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:56:24.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:24.463133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:56:24.464339 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:56:24.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:24.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:24.479061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:56:24.481190 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:56:24.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:24.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:24.483606 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:56:24.520294 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:56:24.523040 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:56:24.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:24.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:24.524505 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:56:24.531861 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:56:24.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:24.538235 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:56:24.540794 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:56:24.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:24.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:24.578000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:56:24.578000 audit[1539]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc2e9b4c10 a2=420 a3=0 items=0 ppid=1499 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:24.578000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:56:24.578909 augenrules[1539]: No rules Dec 16 12:56:24.579693 systemd-udevd[1504]: Using default interface naming scheme 'v257'. Dec 16 12:56:24.583409 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:56:24.583882 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:56:24.594375 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:56:24.596661 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:56:24.647361 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:56:24.655044 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:56:24.679464 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:56:24.687590 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:56:24.819980 systemd-networkd[1550]: lo: Link UP Dec 16 12:56:24.819995 systemd-networkd[1550]: lo: Gained carrier Dec 16 12:56:24.823832 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:56:24.825817 systemd[1]: Reached target network.target - Network. Dec 16 12:56:24.830579 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:56:24.834490 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:56:24.919351 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Dec 16 12:56:24.940217 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 12:56:25.003358 systemd-networkd[1550]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:56:25.003865 systemd-networkd[1550]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:56:25.010479 systemd-networkd[1550]: eth0: Link UP Dec 16 12:56:25.011587 systemd-networkd[1550]: eth0: Gained carrier Dec 16 12:56:25.016315 systemd-networkd[1550]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:56:25.055322 systemd-networkd[1550]: eth0: DHCPv4 address 10.244.27.222/30, gateway 10.244.27.221 acquired from 10.244.27.221 Dec 16 12:56:25.058320 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Dec 16 12:56:25.096350 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:56:25.111359 ldconfig[1501]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:56:25.121642 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:56:25.126314 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:56:25.135247 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 16 12:56:25.143251 kernel: ACPI: button: Power Button [PWRF] Dec 16 12:56:25.153957 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:56:25.155447 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:56:25.157212 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:56:25.159687 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:56:25.161347 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:56:25.162156 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 12:56:25.163118 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:56:25.165321 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:56:25.166374 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:56:25.167653 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:56:25.168607 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:56:25.169690 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:56:25.169744 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:56:25.170637 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:56:25.172915 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:56:25.175632 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:56:25.181049 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:56:25.182916 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
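
The DHCPv4 lease reported above, 10.244.27.222/30 with gateway 10.244.27.221, is a point-to-point-sized subnet; a quick check with Python's ipaddress module, using only the values from the log, shows the gateway and this host are its only usable addresses:

    import ipaddress

    # Values taken from the systemd-networkd DHCPv4 line above.
    iface = ipaddress.ip_interface("10.244.27.222/30")
    print(iface.network)                            # 10.244.27.220/30
    print([str(h) for h in iface.network.hosts()])  # ['10.244.27.221', '10.244.27.222']
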
Dec 16 12:56:25.184062 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:56:25.193569 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:56:25.195094 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:56:25.199270 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:56:25.200790 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:56:25.203048 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:56:25.210187 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:56:25.210965 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:56:25.211026 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:56:25.216829 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:56:25.219992 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 12:56:25.223946 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:56:25.230012 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:56:25.238757 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:56:25.248980 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:56:25.249755 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:56:25.253091 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 12:56:25.256204 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:56:25.261739 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:56:25.268736 jq[1587]: false Dec 16 12:56:25.269532 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:56:25.273433 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 12:56:25.275828 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:56:25.290450 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:56:25.291305 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:56:25.292161 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:56:25.295803 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:56:25.304596 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:56:25.309872 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:56:25.320780 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:56:25.322995 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:56:25.324478 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:56:25.325104 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Dec 16 12:56:25.325537 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:56:25.345294 jq[1599]: true Dec 16 12:56:25.350686 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Refreshing passwd entry cache Dec 16 12:56:25.350703 oslogin_cache_refresh[1590]: Refreshing passwd entry cache Dec 16 12:56:25.396347 oslogin_cache_refresh[1590]: Failure getting users, quitting Dec 16 12:56:25.397431 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Failure getting users, quitting Dec 16 12:56:25.397431 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 12:56:25.397431 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Refreshing group entry cache Dec 16 12:56:25.396374 oslogin_cache_refresh[1590]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 12:56:25.396461 oslogin_cache_refresh[1590]: Refreshing group entry cache Dec 16 12:56:25.403494 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Failure getting groups, quitting Dec 16 12:56:25.403494 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 12:56:25.402473 oslogin_cache_refresh[1590]: Failure getting groups, quitting Dec 16 12:56:25.402492 oslogin_cache_refresh[1590]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 12:56:25.419978 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 12:56:25.420535 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 12:56:25.431275 extend-filesystems[1589]: Found /dev/vda6 Dec 16 12:56:25.438433 update_engine[1598]: I20251216 12:56:25.423159 1598 main.cc:92] Flatcar Update Engine starting Dec 16 12:56:25.441796 extend-filesystems[1589]: Found /dev/vda9 Dec 16 12:56:25.452579 tar[1601]: linux-amd64/LICENSE Dec 16 12:56:25.452579 tar[1601]: linux-amd64/helm Dec 16 12:56:25.464258 jq[1610]: true Dec 16 12:56:25.464208 dbus-daemon[1585]: [system] SELinux support is enabled Dec 16 12:56:25.468255 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:56:25.473623 extend-filesystems[1589]: Checking size of /dev/vda9 Dec 16 12:56:25.475409 dbus-daemon[1585]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1550 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 12:56:25.477258 update_engine[1598]: I20251216 12:56:25.476212 1598 update_check_scheduler.cc:74] Next update check in 8m36s Dec 16 12:56:25.475627 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:56:25.476046 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:56:25.479328 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:56:25.479378 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:56:25.481355 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Dec 16 12:56:25.481390 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:56:25.481930 dbus-daemon[1585]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:56:25.482717 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:56:25.493575 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 12:56:25.518407 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:56:25.536366 systemd-logind[1597]: New seat seat0. Dec 16 12:56:25.539773 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:56:25.543819 extend-filesystems[1589]: Resized partition /dev/vda9 Dec 16 12:56:25.551640 extend-filesystems[1645]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:56:25.557742 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Dec 16 12:56:25.590011 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:56:25.639249 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 12:56:25.646271 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 12:56:25.759842 bash[1660]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:56:25.764351 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:56:25.768537 systemd[1]: Starting sshkeys.service... Dec 16 12:56:25.845291 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Dec 16 12:56:25.841044 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 12:56:25.855336 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 12:56:25.869031 extend-filesystems[1645]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:56:25.869031 extend-filesystems[1645]: old_desc_blocks = 1, new_desc_blocks = 7 Dec 16 12:56:25.869031 extend-filesystems[1645]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Dec 16 12:56:25.882171 extend-filesystems[1589]: Resized filesystem in /dev/vda9 Dec 16 12:56:25.871832 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:56:25.873840 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:56:25.891457 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 12:56:25.896542 dbus-daemon[1585]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 12:56:25.908963 dbus-daemon[1585]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1640 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 12:56:25.917636 systemd[1]: Starting polkit.service - Authorization Manager... 
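extend-filesystems grew the root filesystem online: resize2fs 1.47.3 takes /dev/vda9 from 1617920 to 14138363 blocks of 4 KiB (roughly 6.2 GiB to 53.9 GiB) while it stays mounted on /. The manual equivalent, once the partition itself has been enlarged, is a single call; device name as in the log:

    resize2fs /dev/vda9                               # online grow of a mounted ext4 filesystem
    echo $(( 14138363 * 4096 / 1024 / 1024 / 1024 ))  # implied size in whole GiB (prints 53)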
Dec 16 12:56:25.933257 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 12:56:26.134311 containerd[1627]: time="2025-12-16T12:56:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:56:26.146251 containerd[1627]: time="2025-12-16T12:56:26.144456434Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:56:26.193525 containerd[1627]: time="2025-12-16T12:56:26.192748447Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="27.427µs" Dec 16 12:56:26.193525 containerd[1627]: time="2025-12-16T12:56:26.192809305Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:56:26.193525 containerd[1627]: time="2025-12-16T12:56:26.192878185Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:56:26.193525 containerd[1627]: time="2025-12-16T12:56:26.192914240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:56:26.193525 containerd[1627]: time="2025-12-16T12:56:26.193189178Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:56:26.193525 containerd[1627]: time="2025-12-16T12:56:26.193216160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:56:26.193525 containerd[1627]: time="2025-12-16T12:56:26.193339573Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:56:26.193525 containerd[1627]: time="2025-12-16T12:56:26.193360296Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:56:26.200604 containerd[1627]: time="2025-12-16T12:56:26.199712818Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:56:26.200604 containerd[1627]: time="2025-12-16T12:56:26.199767251Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:56:26.200604 containerd[1627]: time="2025-12-16T12:56:26.199790107Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:56:26.200604 containerd[1627]: time="2025-12-16T12:56:26.199805621Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:56:26.200604 containerd[1627]: time="2025-12-16T12:56:26.200206806Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:56:26.200604 containerd[1627]: time="2025-12-16T12:56:26.200274519Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:56:26.200604 containerd[1627]: time="2025-12-16T12:56:26.200479096Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:56:26.202477 containerd[1627]: time="2025-12-16T12:56:26.202445908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:56:26.209600 containerd[1627]: time="2025-12-16T12:56:26.205294208Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:56:26.209600 containerd[1627]: time="2025-12-16T12:56:26.205386643Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:56:26.209600 containerd[1627]: time="2025-12-16T12:56:26.205464552Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:56:26.209600 containerd[1627]: time="2025-12-16T12:56:26.205949498Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:56:26.209600 containerd[1627]: time="2025-12-16T12:56:26.207269910Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.220663059Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.220772748Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.220893343Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.220918600Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.220940237Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.220959631Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.220997164Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.221018496Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.221037240Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.221057648Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.221075854Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.221095520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:56:26.222252 
containerd[1627]: time="2025-12-16T12:56:26.221113976Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:56:26.222252 containerd[1627]: time="2025-12-16T12:56:26.221135036Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221348924Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221400301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221425996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221451059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221470283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221514343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221538631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221557819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221583296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221620588Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221642165Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221683939Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221769673Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221795956Z" level=info msg="Start snapshots syncer" Dec 16 12:56:26.222817 containerd[1627]: time="2025-12-16T12:56:26.221841745Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:56:26.228266 containerd[1627]: time="2025-12-16T12:56:26.227759960Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:56:26.228266 containerd[1627]: time="2025-12-16T12:56:26.227884602Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:56:26.228627 containerd[1627]: time="2025-12-16T12:56:26.228047950Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:56:26.230599 containerd[1627]: time="2025-12-16T12:56:26.230013630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:56:26.230783 containerd[1627]: time="2025-12-16T12:56:26.230732929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:56:26.233312 containerd[1627]: time="2025-12-16T12:56:26.232265882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:56:26.233312 containerd[1627]: time="2025-12-16T12:56:26.232301310Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:56:26.233312 containerd[1627]: time="2025-12-16T12:56:26.232357315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:56:26.233312 containerd[1627]: time="2025-12-16T12:56:26.232385207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:56:26.233312 containerd[1627]: time="2025-12-16T12:56:26.232404105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:56:26.233312 containerd[1627]: time="2025-12-16T12:56:26.232445518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
12:56:26.233312 containerd[1627]: time="2025-12-16T12:56:26.232464435Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:56:26.233329 locksmithd[1642]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:56:26.237250 containerd[1627]: time="2025-12-16T12:56:26.233999750Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:56:26.237250 containerd[1627]: time="2025-12-16T12:56:26.234039671Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:56:26.237250 containerd[1627]: time="2025-12-16T12:56:26.234078706Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:56:26.237250 containerd[1627]: time="2025-12-16T12:56:26.234098626Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:56:26.237250 containerd[1627]: time="2025-12-16T12:56:26.234113041Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:56:26.237250 containerd[1627]: time="2025-12-16T12:56:26.234131826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:56:26.237250 containerd[1627]: time="2025-12-16T12:56:26.235908990Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:56:26.238062 containerd[1627]: time="2025-12-16T12:56:26.237574999Z" level=info msg="runtime interface created" Dec 16 12:56:26.238062 containerd[1627]: time="2025-12-16T12:56:26.237601040Z" level=info msg="created NRI interface" Dec 16 12:56:26.238062 containerd[1627]: time="2025-12-16T12:56:26.237640122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:56:26.238062 containerd[1627]: time="2025-12-16T12:56:26.237673668Z" level=info msg="Connect containerd service" Dec 16 12:56:26.238062 containerd[1627]: time="2025-12-16T12:56:26.237739954Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:56:26.245057 containerd[1627]: time="2025-12-16T12:56:26.244948971Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:56:26.279520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:56:26.372304 systemd-logind[1597]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 12:56:26.463719 containerd[1627]: time="2025-12-16T12:56:26.463290805Z" level=info msg="Start subscribing containerd event" Dec 16 12:56:26.463719 containerd[1627]: time="2025-12-16T12:56:26.463459159Z" level=info msg="Start recovering state" Dec 16 12:56:26.463719 containerd[1627]: time="2025-12-16T12:56:26.463658890Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:56:26.463917 containerd[1627]: time="2025-12-16T12:56:26.463763554Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 12:56:26.464423 containerd[1627]: time="2025-12-16T12:56:26.464211966Z" level=info msg="Start event monitor" Dec 16 12:56:26.464603 containerd[1627]: time="2025-12-16T12:56:26.464576802Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:56:26.464722 containerd[1627]: time="2025-12-16T12:56:26.464698326Z" level=info msg="Start streaming server" Dec 16 12:56:26.465403 containerd[1627]: time="2025-12-16T12:56:26.465376396Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:56:26.465675 containerd[1627]: time="2025-12-16T12:56:26.465648049Z" level=info msg="runtime interface starting up..." Dec 16 12:56:26.467513 containerd[1627]: time="2025-12-16T12:56:26.466204209Z" level=info msg="starting plugins..." Dec 16 12:56:26.467513 containerd[1627]: time="2025-12-16T12:56:26.467301325Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:56:26.469804 systemd-logind[1597]: Watching system buttons on /dev/input/event3 (Power Button) Dec 16 12:56:26.474117 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:56:26.479405 containerd[1627]: time="2025-12-16T12:56:26.478284374Z" level=info msg="containerd successfully booted in 0.345016s" Dec 16 12:56:26.650444 polkitd[1673]: Started polkitd version 126 Dec 16 12:56:26.669114 polkitd[1673]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 12:56:26.672595 polkitd[1673]: Loading rules from directory /run/polkit-1/rules.d Dec 16 12:56:26.672683 polkitd[1673]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:56:26.673037 polkitd[1673]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 12:56:26.673084 polkitd[1673]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:56:26.673144 polkitd[1673]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 12:56:26.678391 polkitd[1673]: Finished loading, compiling and executing 2 rules Dec 16 12:56:26.679278 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 12:56:26.679509 dbus-daemon[1585]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 12:56:26.681613 polkitd[1673]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 12:56:26.719699 systemd-hostnamed[1640]: Hostname set to (static) Dec 16 12:56:26.929434 systemd-networkd[1550]: eth0: Gained IPv6LL Dec 16 12:56:26.933535 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Dec 16 12:56:26.939889 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:56:26.953486 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:56:26.962659 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:56:26.970700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:56:26.977000 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:56:27.091622 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:56:27.098172 sshd_keygen[1634]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:56:27.140354 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
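sshd-keygen.service created the three missing host key pairs (RSA, ECDSA, ED25519) before the first connection on port 22 was accepted. Doing the same by hand is one ssh-keygen call; -A generates any default host key type that does not exist yet:

    ssh-keygen -A                                        # create missing host keys under /etc/ssh
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub     # print the new ED25519 fingerprint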
Dec 16 12:56:27.144015 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:56:27.146807 systemd[1]: Started sshd@0-10.244.27.222:22-139.178.68.195:41584.service - OpenSSH per-connection server daemon (139.178.68.195:41584). Dec 16 12:56:27.206890 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:56:27.207343 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:56:27.213466 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:56:27.215692 tar[1601]: linux-amd64/README.md Dec 16 12:56:27.234392 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:56:27.266319 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:56:27.271647 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:56:27.279619 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 12:56:27.281850 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:56:28.069787 sshd[1740]: Accepted publickey for core from 139.178.68.195 port 41584 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:56:28.074286 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:28.096130 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:56:28.105450 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:56:28.114727 systemd-logind[1597]: New session 1 of user core. Dec 16 12:56:28.147866 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:56:28.153227 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:56:28.175916 (systemd)[1756]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:28.181429 systemd-logind[1597]: New session 2 of user core. Dec 16 12:56:28.208105 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 12:56:28.208282 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 12:56:28.310289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:56:28.321756 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:56:28.374845 systemd[1756]: Queued start job for default target default.target. Dec 16 12:56:28.381658 systemd[1756]: Created slice app.slice - User Application Slice. Dec 16 12:56:28.381715 systemd[1756]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:56:28.381739 systemd[1756]: Reached target paths.target - Paths. Dec 16 12:56:28.381820 systemd[1756]: Reached target timers.target - Timers. Dec 16 12:56:28.385362 systemd[1756]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:56:28.386606 systemd[1756]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 12:56:28.410785 systemd[1756]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:56:28.412151 systemd[1756]: Reached target sockets.target - Sockets. Dec 16 12:56:28.418786 systemd[1756]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:56:28.418958 systemd[1756]: Reached target basic.target - Basic System. Dec 16 12:56:28.419078 systemd[1756]: Reached target default.target - Main User Target. 
Dec 16 12:56:28.419160 systemd[1756]: Startup finished in 228ms. Dec 16 12:56:28.419358 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:56:28.429974 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:56:28.442837 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Dec 16 12:56:28.449660 systemd-networkd[1550]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:6f7:24:19ff:fef4:1bde/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:6f7:24:19ff:fef4:1bde/64 assigned by NDisc. Dec 16 12:56:28.449677 systemd-networkd[1550]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 16 12:56:28.915137 systemd[1]: Started sshd@1-10.244.27.222:22-139.178.68.195:41594.service - OpenSSH per-connection server daemon (139.178.68.195:41594). Dec 16 12:56:29.036177 kubelet[1770]: E1216 12:56:29.036084 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:56:29.038999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:56:29.039322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:56:29.040255 systemd[1]: kubelet.service: Consumed 1.132s CPU time, 267.8M memory peak. Dec 16 12:56:29.751675 sshd[1784]: Accepted publickey for core from 139.178.68.195 port 41594 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:56:29.753904 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:29.764065 systemd-logind[1597]: New session 3 of user core. Dec 16 12:56:29.771612 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:56:30.194383 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Dec 16 12:56:30.230272 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 12:56:30.232269 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 12:56:30.238021 sshd[1789]: Connection closed by 139.178.68.195 port 41594 Dec 16 12:56:30.237828 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:30.249670 systemd[1]: sshd@1-10.244.27.222:22-139.178.68.195:41594.service: Deactivated successfully. Dec 16 12:56:30.253594 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:56:30.256886 systemd-logind[1597]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:56:30.258941 systemd-logind[1597]: Removed session 3. Dec 16 12:56:30.414744 systemd[1]: Started sshd@2-10.244.27.222:22-139.178.68.195:60116.service - OpenSSH per-connection server daemon (139.178.68.195:60116). Dec 16 12:56:31.277828 sshd[1797]: Accepted publickey for core from 139.178.68.195 port 60116 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:56:31.278849 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:31.286805 systemd-logind[1597]: New session 4 of user core. Dec 16 12:56:31.295678 systemd[1]: Started session-4.scope - Session 4 of User core. 
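The kubelet exit above is the usual pre-bootstrap failure on a fresh node: the unit starts before the machine has joined a cluster, finds no /var/lib/kubelet/config.yaml, and exits, and systemd keeps rescheduling it (the restart counter shows up further down). On a kubeadm-managed node that file is normally written during kubeadm init/join, so until then the loop is expected. Checking for the file and watching the loop uses only standard tools:

    ls -l /var/lib/kubelet/config.yaml    # the file the kubelet is complaining about
    systemctl status kubelet.service      # shows the exit code and restart counter
    journalctl -u kubelet.service -f      # follow the retries until bootstrap writes the config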
Dec 16 12:56:31.767364 sshd[1801]: Connection closed by 139.178.68.195 port 60116 Dec 16 12:56:31.766517 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:31.772771 systemd[1]: sshd@2-10.244.27.222:22-139.178.68.195:60116.service: Deactivated successfully. Dec 16 12:56:31.775570 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:56:31.778483 systemd-logind[1597]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:56:31.780611 systemd-logind[1597]: Removed session 4. Dec 16 12:56:32.378017 login[1751]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:32.386105 systemd-logind[1597]: New session 5 of user core. Dec 16 12:56:32.402787 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:56:32.696962 login[1752]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:32.704907 systemd-logind[1597]: New session 6 of user core. Dec 16 12:56:32.711639 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:56:34.249265 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 12:56:34.254306 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 12:56:34.262249 coreos-metadata[1667]: Dec 16 12:56:34.261 WARN failed to locate config-drive, using the metadata service API instead Dec 16 12:56:34.266413 coreos-metadata[1584]: Dec 16 12:56:34.266 WARN failed to locate config-drive, using the metadata service API instead Dec 16 12:56:34.295817 coreos-metadata[1667]: Dec 16 12:56:34.295 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 16 12:56:34.297061 coreos-metadata[1584]: Dec 16 12:56:34.296 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 16 12:56:34.304075 coreos-metadata[1584]: Dec 16 12:56:34.303 INFO Fetch failed with 404: resource not found Dec 16 12:56:34.304342 coreos-metadata[1584]: Dec 16 12:56:34.304 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 12:56:34.304723 coreos-metadata[1584]: Dec 16 12:56:34.304 INFO Fetch successful Dec 16 12:56:34.304723 coreos-metadata[1584]: Dec 16 12:56:34.304 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 16 12:56:34.323540 coreos-metadata[1584]: Dec 16 12:56:34.323 INFO Fetch successful Dec 16 12:56:34.323800 coreos-metadata[1584]: Dec 16 12:56:34.323 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 16 12:56:34.332873 coreos-metadata[1667]: Dec 16 12:56:34.332 INFO Fetch successful Dec 16 12:56:34.333161 coreos-metadata[1667]: Dec 16 12:56:34.333 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 12:56:34.337489 coreos-metadata[1584]: Dec 16 12:56:34.337 INFO Fetch successful Dec 16 12:56:34.337489 coreos-metadata[1584]: Dec 16 12:56:34.337 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 16 12:56:34.355940 coreos-metadata[1584]: Dec 16 12:56:34.355 INFO Fetch successful Dec 16 12:56:34.355940 coreos-metadata[1584]: Dec 16 12:56:34.355 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 16 12:56:34.363787 coreos-metadata[1667]: Dec 16 12:56:34.363 INFO Fetch successful Dec 16 12:56:34.366063 unknown[1667]: wrote ssh authorized keys file for user: core Dec 16 12:56:34.377081 coreos-metadata[1584]: Dec 16 12:56:34.376 INFO Fetch successful Dec 16 12:56:34.390779 update-ssh-keys[1837]: 
Updated "/home/core/.ssh/authorized_keys" Dec 16 12:56:34.396723 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 12:56:34.402320 systemd[1]: Finished sshkeys.service. Dec 16 12:56:34.421965 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 12:56:34.422953 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:56:34.423187 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:56:34.423455 systemd[1]: Startup finished in 3.554s (kernel) + 15.372s (initrd) + 13.236s (userspace) = 32.164s. Dec 16 12:56:39.290035 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:56:39.293134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:56:39.539553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:56:39.551751 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:56:39.618257 kubelet[1854]: E1216 12:56:39.618149 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:56:39.623114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:56:39.623574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:56:39.624385 systemd[1]: kubelet.service: Consumed 244ms CPU time, 108.5M memory peak. Dec 16 12:56:41.917463 systemd[1]: Started sshd@3-10.244.27.222:22-139.178.68.195:55610.service - OpenSSH per-connection server daemon (139.178.68.195:55610). Dec 16 12:56:42.732651 sshd[1862]: Accepted publickey for core from 139.178.68.195 port 55610 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:56:42.735053 sshd-session[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:42.742380 systemd-logind[1597]: New session 7 of user core. Dec 16 12:56:42.747466 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:56:43.190412 sshd[1866]: Connection closed by 139.178.68.195 port 55610 Dec 16 12:56:43.191188 sshd-session[1862]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:43.196394 systemd[1]: sshd@3-10.244.27.222:22-139.178.68.195:55610.service: Deactivated successfully. Dec 16 12:56:43.199149 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:56:43.201353 systemd-logind[1597]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:56:43.203526 systemd-logind[1597]: Removed session 7. Dec 16 12:56:43.378701 systemd[1]: Started sshd@4-10.244.27.222:22-139.178.68.195:55612.service - OpenSSH per-connection server daemon (139.178.68.195:55612). Dec 16 12:56:44.257532 sshd[1872]: Accepted publickey for core from 139.178.68.195 port 55612 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:56:44.259382 sshd-session[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:44.268094 systemd-logind[1597]: New session 8 of user core. Dec 16 12:56:44.274490 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 12:56:44.746282 sshd[1876]: Connection closed by 139.178.68.195 port 55612 Dec 16 12:56:44.745561 sshd-session[1872]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:44.752384 systemd[1]: sshd@4-10.244.27.222:22-139.178.68.195:55612.service: Deactivated successfully. Dec 16 12:56:44.752941 systemd-logind[1597]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:56:44.754843 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:56:44.758009 systemd-logind[1597]: Removed session 8. Dec 16 12:56:44.916258 systemd[1]: Started sshd@5-10.244.27.222:22-139.178.68.195:55624.service - OpenSSH per-connection server daemon (139.178.68.195:55624). Dec 16 12:56:45.773915 sshd[1882]: Accepted publickey for core from 139.178.68.195 port 55624 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:56:45.775938 sshd-session[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:45.785008 systemd-logind[1597]: New session 9 of user core. Dec 16 12:56:45.800581 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:56:46.257253 sshd[1886]: Connection closed by 139.178.68.195 port 55624 Dec 16 12:56:46.257996 sshd-session[1882]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:46.264012 systemd[1]: sshd@5-10.244.27.222:22-139.178.68.195:55624.service: Deactivated successfully. Dec 16 12:56:46.266843 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:56:46.268301 systemd-logind[1597]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:56:46.270939 systemd-logind[1597]: Removed session 9. Dec 16 12:56:46.435072 systemd[1]: Started sshd@6-10.244.27.222:22-139.178.68.195:55626.service - OpenSSH per-connection server daemon (139.178.68.195:55626). Dec 16 12:56:47.304811 sshd[1892]: Accepted publickey for core from 139.178.68.195 port 55626 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:56:47.306688 sshd-session[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:47.313754 systemd-logind[1597]: New session 10 of user core. Dec 16 12:56:47.325657 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:56:47.654273 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:56:47.654847 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:56:47.664947 sudo[1897]: pam_unix(sudo:session): session closed for user root Dec 16 12:56:47.826797 sshd[1896]: Connection closed by 139.178.68.195 port 55626 Dec 16 12:56:47.827917 sshd-session[1892]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:47.834302 systemd[1]: sshd@6-10.244.27.222:22-139.178.68.195:55626.service: Deactivated successfully. Dec 16 12:56:47.836906 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:56:47.838973 systemd-logind[1597]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:56:47.841257 systemd-logind[1597]: Removed session 10. Dec 16 12:56:48.003466 systemd[1]: Started sshd@7-10.244.27.222:22-139.178.68.195:55636.service - OpenSSH per-connection server daemon (139.178.68.195:55636). 
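The sudo record at 12:56:47 shows the core user flipping SELinux into enforcing mode for the running system; setenforce only changes the runtime state, while the persistent default stays in /etc/selinux/config. The minimal check-and-switch sequence matching that command:

    getenforce            # prints Permissive or Enforcing
    sudo setenforce 1     # enforcing until the next reboot (what the log shows)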
Dec 16 12:56:48.860707 sshd[1904]: Accepted publickey for core from 139.178.68.195 port 55636 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:56:48.862655 sshd-session[1904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:48.870298 systemd-logind[1597]: New session 11 of user core. Dec 16 12:56:48.879510 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:56:49.192272 sudo[1910]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:56:49.192776 sudo[1910]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:56:49.196148 sudo[1910]: pam_unix(sudo:session): session closed for user root Dec 16 12:56:49.206134 sudo[1909]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:56:49.207325 sudo[1909]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:56:49.218043 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:56:49.268000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:56:49.273532 kernel: kauditd_printk_skb: 127 callbacks suppressed Dec 16 12:56:49.273652 kernel: audit: type=1305 audit(1765889809.268:224): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:56:49.268000 audit[1934]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff43d56ac0 a2=420 a3=0 items=0 ppid=1915 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:49.274979 augenrules[1934]: No rules Dec 16 12:56:49.276620 kernel: audit: type=1300 audit(1765889809.268:224): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff43d56ac0 a2=420 a3=0 items=0 ppid=1915 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:49.282242 kernel: audit: type=1327 audit(1765889809.268:224): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:56:49.268000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:56:49.281262 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:56:49.282762 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:56:49.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:49.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:49.288329 sudo[1909]: pam_unix(sudo:session): session closed for user root Dec 16 12:56:49.290648 kernel: audit: type=1130 audit(1765889809.284:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:49.290737 kernel: audit: type=1131 audit(1765889809.284:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:49.286000 audit[1909]: USER_END pid=1909 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:56:49.294737 kernel: audit: type=1106 audit(1765889809.286:227): pid=1909 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:56:49.286000 audit[1909]: CRED_DISP pid=1909 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:56:49.298950 kernel: audit: type=1104 audit(1765889809.286:228): pid=1909 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:56:49.451005 sshd[1908]: Connection closed by 139.178.68.195 port 55636 Dec 16 12:56:49.449704 sshd-session[1904]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:49.452000 audit[1904]: USER_END pid=1904 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:56:49.453000 audit[1904]: CRED_DISP pid=1904 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:56:49.461627 kernel: audit: type=1106 audit(1765889809.452:229): pid=1904 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:56:49.461734 kernel: audit: type=1104 audit(1765889809.453:230): pid=1904 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:56:49.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.244.27.222:22-139.178.68.195:55636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:49.462448 systemd-logind[1597]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:56:49.463658 systemd[1]: sshd@7-10.244.27.222:22-139.178.68.195:55636.service: Deactivated successfully. 
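The audit events above carry the executed command line as a hex-encoded, NUL-separated PROCTITLE field. Decoding the value from the CONFIG_CHANGE event shows auditctl reloading the now-empty rule set right after the two default rules files were removed:

    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
        | xxd -r -p | tr '\0' ' '; echo
    # prints: /sbin/auditctl -R /etc/audit/audit.rules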
Dec 16 12:56:49.466166 kernel: audit: type=1131 audit(1765889809.463:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.244.27.222:22-139.178.68.195:55636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:49.468515 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:56:49.472501 systemd-logind[1597]: Removed session 11. Dec 16 12:56:49.621835 systemd[1]: Started sshd@8-10.244.27.222:22-139.178.68.195:55652.service - OpenSSH per-connection server daemon (139.178.68.195:55652). Dec 16 12:56:49.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.27.222:22-139.178.68.195:55652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:49.626115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:56:49.629453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:56:49.818370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:56:49.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:49.831692 (kubelet)[1954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:56:49.906565 kubelet[1954]: E1216 12:56:49.906482 1954 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:56:49.910458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:56:49.910701 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:56:49.911418 systemd[1]: kubelet.service: Consumed 219ms CPU time, 110M memory peak. Dec 16 12:56:49.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 12:56:50.470000 audit[1943]: USER_ACCT pid=1943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:56:50.472181 sshd[1943]: Accepted publickey for core from 139.178.68.195 port 55652 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:56:50.472000 audit[1943]: CRED_ACQ pid=1943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:56:50.472000 audit[1943]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5e2caa80 a2=3 a3=0 items=0 ppid=1 pid=1943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:50.472000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:50.473347 sshd-session[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:50.481299 systemd-logind[1597]: New session 12 of user core. Dec 16 12:56:50.491629 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:56:50.497000 audit[1943]: USER_START pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:56:50.499000 audit[1962]: CRED_ACQ pid=1962 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:56:50.797000 audit[1963]: USER_ACCT pid=1963 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:56:50.797000 audit[1963]: CRED_REFR pid=1963 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:56:50.797934 sudo[1963]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:56:50.798495 sudo[1963]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:56:50.798000 audit[1963]: USER_START pid=1963 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:56:51.332173 systemd[1]: Starting docker.service - Docker Application Container Engine... 
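In the dockerd output that follows, "Loading containers: start." is trailed immediately by a burst of NETFILTER_CFG audit events: the daemon creates its nat and filter chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2) and then hooks them into PREROUTING, OUTPUT and FORWARD. Each proctitle decodes to a plain iptables call; the first two, with the /usr/bin path dropped, are:

    iptables --wait -t nat -N DOCKER
    iptables --wait -t filter -N DOCKER
    iptables -t nat -nL DOCKER      # list the resulting nat chain once docker0 is up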
Dec 16 12:56:51.355753 (dockerd)[1983]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:56:51.753305 dockerd[1983]: time="2025-12-16T12:56:51.753168233Z" level=info msg="Starting up" Dec 16 12:56:51.755263 dockerd[1983]: time="2025-12-16T12:56:51.755202151Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:56:51.772555 dockerd[1983]: time="2025-12-16T12:56:51.772423839Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:56:51.802280 systemd[1]: var-lib-docker-metacopy\x2dcheck1297001137-merged.mount: Deactivated successfully. Dec 16 12:56:51.839532 dockerd[1983]: time="2025-12-16T12:56:51.839298897Z" level=info msg="Loading containers: start." Dec 16 12:56:51.855268 kernel: Initializing XFRM netlink socket Dec 16 12:56:51.945000 audit[2035]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:51.945000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff59da7070 a2=0 a3=0 items=0 ppid=1983 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:51.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:56:51.948000 audit[2037]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:51.948000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffb6dee050 a2=0 a3=0 items=0 ppid=1983 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:51.948000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:56:51.952000 audit[2039]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:51.952000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe368c8370 a2=0 a3=0 items=0 ppid=1983 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:51.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:56:51.955000 audit[2041]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:51.955000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd3306170 a2=0 a3=0 items=0 ppid=1983 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:51.955000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:56:51.958000 audit[2043]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:51.958000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdfdd08bf0 a2=0 a3=0 items=0 ppid=1983 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:51.958000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:56:51.961000 audit[2045]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:51.961000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe9357bdc0 a2=0 a3=0 items=0 ppid=1983 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:51.961000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:56:51.965000 audit[2047]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:51.965000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdefa26dd0 a2=0 a3=0 items=0 ppid=1983 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:51.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:56:51.968000 audit[2049]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:51.968000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdcaeef660 a2=0 a3=0 items=0 ppid=1983 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:51.968000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:56:52.016000 audit[2052]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.016000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff95535a50 a2=0 a3=0 items=0 ppid=1983 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.016000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:56:52.019000 audit[2054]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.019000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff38513d80 a2=0 a3=0 items=0 ppid=1983 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.019000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:56:52.023000 audit[2056]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.023000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd05882fa0 a2=0 a3=0 items=0 ppid=1983 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.023000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:56:52.026000 audit[2058]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.026000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd663ce7d0 a2=0 a3=0 items=0 ppid=1983 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.026000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:56:52.029000 audit[2060]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.029000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdc9c6db60 a2=0 a3=0 items=0 ppid=1983 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.029000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:56:52.087000 audit[2090]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.087000 audit[2090]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdd41748c0 a2=0 a3=0 items=0 ppid=1983 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.087000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:56:52.090000 audit[2092]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.090000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff19b7f9c0 a2=0 a3=0 items=0 ppid=1983 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.090000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:56:52.094000 audit[2094]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.094000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7ce725b0 a2=0 a3=0 items=0 ppid=1983 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:56:52.098000 audit[2096]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.098000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffef0a4780 a2=0 a3=0 items=0 ppid=1983 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.098000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:56:52.101000 audit[2098]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.101000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdd7fddbc0 a2=0 a3=0 items=0 ppid=1983 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.101000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:56:52.105000 audit[2100]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.105000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe6feadc60 a2=0 a3=0 items=0 ppid=1983 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.105000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:56:52.109000 audit[2102]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2102 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.109000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff8f483e30 a2=0 a3=0 items=0 ppid=1983 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.109000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:56:52.112000 audit[2104]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.112000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdeaa08120 a2=0 a3=0 items=0 ppid=1983 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.112000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:56:52.117000 audit[2106]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.117000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffea02de350 a2=0 a3=0 items=0 ppid=1983 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.117000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 12:56:52.120000 audit[2108]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.120000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff59ce1c40 a2=0 a3=0 items=0 ppid=1983 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.120000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:56:52.124000 audit[2110]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.124000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd5a568780 a2=0 a3=0 items=0 ppid=1983 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.124000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:56:52.128000 audit[2112]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2112 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 12:56:52.128000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffeb2feb50 a2=0 a3=0 items=0 ppid=1983 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:56:52.131000 audit[2114]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.131000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe5fb10fb0 a2=0 a3=0 items=0 ppid=1983 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.131000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:56:52.140000 audit[2119]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.140000 audit[2119]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd28293e80 a2=0 a3=0 items=0 ppid=1983 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.140000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:56:52.156000 audit[2121]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.156000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc5d6c8b10 a2=0 a3=0 items=0 ppid=1983 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.156000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:56:52.159000 audit[2123]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.159000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc36c96020 a2=0 a3=0 items=0 ppid=1983 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:56:52.162000 audit[2125]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.162000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff321048d0 a2=0 a3=0 items=0 ppid=1983 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.162000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:56:52.168000 audit[2127]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.168000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc3e2d6b80 a2=0 a3=0 items=0 ppid=1983 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.168000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:56:52.172000 audit[2129]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:56:52.172000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc9517e9a0 a2=0 a3=0 items=0 ppid=1983 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:56:52.188278 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Dec 16 12:56:52.209000 audit[2133]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.209000 audit[2133]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff229dbd80 a2=0 a3=0 items=0 ppid=1983 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.209000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:56:52.213000 audit[2135]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.213000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff97e828e0 a2=0 a3=0 items=0 ppid=1983 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.213000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:56:52.228000 audit[2143]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.228000 audit[2143]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffe11ff9a50 a2=0 a3=0 items=0 ppid=1983 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:56:52.245000 audit[2149]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.245000 audit[2149]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff2a6b1a70 a2=0 a3=0 items=0 ppid=1983 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.245000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:56:52.249000 audit[2151]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.249000 audit[2151]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe1651fa40 a2=0 a3=0 items=0 ppid=1983 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.249000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:56:52.253000 audit[2153]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.253000 audit[2153]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc22d209c0 a2=0 a3=0 items=0 ppid=1983 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.253000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:56:52.256000 audit[2155]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.256000 audit[2155]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff4cea44f0 a2=0 a3=0 items=0 ppid=1983 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:56:52.260000 audit[2157]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:56:52.260000 audit[2157]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 
a1=7ffc5a14c2a0 a2=0 a3=0 items=0 ppid=1983 pid=2157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:52.260000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:56:52.261758 systemd-networkd[1550]: docker0: Link UP Dec 16 12:56:52.277011 dockerd[1983]: time="2025-12-16T12:56:52.276763528Z" level=info msg="Loading containers: done." Dec 16 12:56:52.299619 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3976439437-merged.mount: Deactivated successfully. Dec 16 12:56:52.307287 dockerd[1983]: time="2025-12-16T12:56:52.307172301Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:56:52.307633 dockerd[1983]: time="2025-12-16T12:56:52.307348098Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:56:52.307633 dockerd[1983]: time="2025-12-16T12:56:52.307497110Z" level=info msg="Initializing buildkit" Dec 16 12:56:52.338165 dockerd[1983]: time="2025-12-16T12:56:52.338075433Z" level=info msg="Completed buildkit initialization" Dec 16 12:56:52.350317 dockerd[1983]: time="2025-12-16T12:56:52.350215000Z" level=info msg="Daemon has completed initialization" Dec 16 12:56:52.351129 dockerd[1983]: time="2025-12-16T12:56:52.350576461Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:56:52.351688 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:56:52.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:53.142568 systemd-resolved[1300]: Clock change detected. Flushing caches. Dec 16 12:56:53.143075 systemd-timesyncd[1519]: Contacted time server [2a03:b0c0:1:d0::b1d:6001]:123 (2.flatcar.pool.ntp.org). Dec 16 12:56:53.143163 systemd-timesyncd[1519]: Initial clock synchronization to Tue 2025-12-16 12:56:53.142290 UTC. Dec 16 12:56:54.329792 containerd[1627]: time="2025-12-16T12:56:54.329656939Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 12:56:55.398509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2348650658.mount: Deactivated successfully. 
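The PROCTITLE fields in the audit records above are the command lines of the iptables/ip6tables invocations dockerd makes while wiring up the DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains; auditd stores each command line as hex with NUL bytes separating the arguments. A minimal decoding sketch (plain Python, standard library only), using the first proctitle from the block above:

    def decode_proctitle(hex_string: str) -> str:
        """Decode an auditd PROCTITLE value: hex bytes with NUL-separated arguments."""
        raw = bytes.fromhex(hex_string)
        return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

    # First PROCTITLE record in the dockerd block above (pid 2035):
    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
    ))
    # -> /usr/bin/iptables --wait -t nat -N DOCKER

The same decoding applies to every PROCTITLE record in this log, including the POSTROUTING MASQUERADE rule for 172.17.0.0/16 a few records earlier.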
Dec 16 12:56:57.494991 containerd[1627]: time="2025-12-16T12:56:57.493707502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:56:57.495568 containerd[1627]: time="2025-12-16T12:56:57.495382885Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Dec 16 12:56:57.496071 containerd[1627]: time="2025-12-16T12:56:57.496030672Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:56:57.509775 containerd[1627]: time="2025-12-16T12:56:57.509739065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:56:57.511284 containerd[1627]: time="2025-12-16T12:56:57.511237961Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 3.180554801s" Dec 16 12:56:57.511440 containerd[1627]: time="2025-12-16T12:56:57.511411902Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 12:56:57.512846 containerd[1627]: time="2025-12-16T12:56:57.512794004Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 12:56:59.171726 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 12:56:59.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:59.183374 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 16 12:56:59.183533 kernel: audit: type=1131 audit(1765889819.173:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:59.192000 audit: BPF prog-id=61 op=UNLOAD Dec 16 12:56:59.194979 kernel: audit: type=1334 audit(1765889819.192:285): prog-id=61 op=UNLOAD Dec 16 12:57:00.262236 containerd[1627]: time="2025-12-16T12:57:00.262142625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:00.263999 containerd[1627]: time="2025-12-16T12:57:00.263847286Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 16 12:57:00.265241 containerd[1627]: time="2025-12-16T12:57:00.265183585Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:00.270805 containerd[1627]: time="2025-12-16T12:57:00.270284621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:00.271168 containerd[1627]: time="2025-12-16T12:57:00.271129282Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 2.758279156s" Dec 16 12:57:00.271298 containerd[1627]: time="2025-12-16T12:57:00.271268217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 12:57:00.272482 containerd[1627]: time="2025-12-16T12:57:00.272449197Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 12:57:00.827569 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 12:57:00.831584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:57:01.040237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:57:01.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:01.047019 kernel: audit: type=1130 audit(1765889821.039:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:01.066605 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:57:01.155100 kubelet[2271]: E1216 12:57:01.153972 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:57:01.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:57:01.159148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:57:01.159390 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:57:01.160050 systemd[1]: kubelet.service: Consumed 258ms CPU time, 109.5M memory peak. Dec 16 12:57:01.164973 kernel: audit: type=1131 audit(1765889821.159:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:57:02.455938 containerd[1627]: time="2025-12-16T12:57:02.455803925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:02.459667 containerd[1627]: time="2025-12-16T12:57:02.459582450Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Dec 16 12:57:02.460947 containerd[1627]: time="2025-12-16T12:57:02.460912269Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:02.468168 containerd[1627]: time="2025-12-16T12:57:02.468084940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:02.469128 containerd[1627]: time="2025-12-16T12:57:02.469091117Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.196599756s" Dec 16 12:57:02.469310 containerd[1627]: time="2025-12-16T12:57:02.469249103Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 12:57:02.470138 containerd[1627]: time="2025-12-16T12:57:02.470025959Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 12:57:04.365139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617077095.mount: Deactivated successfully. 
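The kauditd lines interleaved above carry their own clock in the audit(&lt;seconds&gt;.&lt;millis&gt;:&lt;serial&gt;) stamp, where the first field is Unix epoch seconds. A one-line conversion (plain Python, standard library only) shows that audit(1765889819.173:284) lines up with the Dec 16 12:56:59 journal timestamp around it:

    from datetime import datetime, timezone

    # audit(1765889819.173:284): epoch seconds plus milliseconds, then the event serial number
    print(datetime.fromtimestamp(1765889819.173, tz=timezone.utc))
    # -> 2025-12-16 12:56:59.173000+00:00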
Dec 16 12:57:05.462120 containerd[1627]: time="2025-12-16T12:57:05.462050810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:05.464681 containerd[1627]: time="2025-12-16T12:57:05.464628906Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Dec 16 12:57:05.465760 containerd[1627]: time="2025-12-16T12:57:05.465692164Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:05.469603 containerd[1627]: time="2025-12-16T12:57:05.469507871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:05.471751 containerd[1627]: time="2025-12-16T12:57:05.471590308Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 3.001517527s" Dec 16 12:57:05.471751 containerd[1627]: time="2025-12-16T12:57:05.471635312Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 12:57:05.472975 containerd[1627]: time="2025-12-16T12:57:05.472711577Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 12:57:06.148499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3244184996.mount: Deactivated successfully. 
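The kubelet failures above follow a fixed pattern: the process exits with status=1 because /var/lib/kubelet/config.yaml has not been written yet (the run.go error spells this out), and systemd reschedules the unit, with the restart counter climbing (3 above, 4 further down) until a configuration appears. A small scan of the journal text makes the loop visible; this is only a sketch, and it assumes one journal entry per line on stdin:

    import re
    import sys

    # e.g.  journalctl -o cat | python3 kubelet_restarts.py   (illustrative invocation)
    counter = 0
    for line in sys.stdin:
        m = re.search(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)", line)
        if m:
            counter = int(m.group(1))
    print(f"kubelet.service restart counter last seen at {counter}")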
Dec 16 12:57:09.699772 containerd[1627]: time="2025-12-16T12:57:09.699691620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:09.701927 containerd[1627]: time="2025-12-16T12:57:09.701878926Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Dec 16 12:57:09.702151 containerd[1627]: time="2025-12-16T12:57:09.702115173Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:09.705881 containerd[1627]: time="2025-12-16T12:57:09.705834445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:09.707628 containerd[1627]: time="2025-12-16T12:57:09.707551463Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 4.234789913s" Dec 16 12:57:09.707804 containerd[1627]: time="2025-12-16T12:57:09.707774097Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 12:57:09.709038 containerd[1627]: time="2025-12-16T12:57:09.708992911Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:57:10.475038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1205491001.mount: Deactivated successfully. 
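Each "Pulled image" entry above reports both a size and a wall-clock duration, so a rough effective pull rate can be read straight off the log. Treating containerd's reported size as bytes (an assumption), the figures quoted above work out as follows:

    # Figures copied from the "Pulled image" entries above: (reported size, seconds)
    pulls = {
        "kube-apiserver:v1.33.7":  (30111311, 3.180554801),
        "kube-proxy:v1.33.7":      (31929115, 3.001517527),
        "coredns/coredns:v1.12.0": (20939036, 4.234789913),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s")
    # -> roughly 9.5, 10.6 and 4.9 MB/s respectively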
Dec 16 12:57:10.483430 containerd[1627]: time="2025-12-16T12:57:10.482425418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:57:10.483430 containerd[1627]: time="2025-12-16T12:57:10.483383232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:57:10.484122 containerd[1627]: time="2025-12-16T12:57:10.484086343Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:57:10.486845 containerd[1627]: time="2025-12-16T12:57:10.486800059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:57:10.487682 containerd[1627]: time="2025-12-16T12:57:10.487639387Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 778.460472ms" Dec 16 12:57:10.487774 containerd[1627]: time="2025-12-16T12:57:10.487684999Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 12:57:10.488877 containerd[1627]: time="2025-12-16T12:57:10.488820239Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 12:57:10.971172 update_engine[1598]: I20251216 12:57:10.971004 1598 update_attempter.cc:509] Updating boot flags... Dec 16 12:57:11.211858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 12:57:11.272180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:57:11.485830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308825009.mount: Deactivated successfully. Dec 16 12:57:11.589676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:57:11.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.597989 kernel: audit: type=1130 audit(1765889831.588:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:11.614519 (kubelet)[2376]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:57:11.758417 kubelet[2376]: E1216 12:57:11.758346 2376 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:57:11.763345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:57:11.763681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:57:11.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:57:11.771379 systemd[1]: kubelet.service: Consumed 244ms CPU time, 109.5M memory peak. Dec 16 12:57:11.772045 kernel: audit: type=1131 audit(1765889831.763:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:57:14.968527 containerd[1627]: time="2025-12-16T12:57:14.968419702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:14.970768 containerd[1627]: time="2025-12-16T12:57:14.970349293Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58133672" Dec 16 12:57:14.971650 containerd[1627]: time="2025-12-16T12:57:14.971610848Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:14.990628 containerd[1627]: time="2025-12-16T12:57:14.990558277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:14.993752 containerd[1627]: time="2025-12-16T12:57:14.993684926Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.504641036s" Dec 16 12:57:14.993752 containerd[1627]: time="2025-12-16T12:57:14.993747484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 12:57:19.967795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:57:19.968135 systemd[1]: kubelet.service: Consumed 244ms CPU time, 109.5M memory peak. Dec 16 12:57:19.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:19.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 12:57:19.991288 kernel: audit: type=1130 audit(1765889839.967:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:19.991475 kernel: audit: type=1131 audit(1765889839.967:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:19.979269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:57:20.028617 systemd[1]: Reload requested from client PID 2462 ('systemctl') (unit session-12.scope)... Dec 16 12:57:20.028674 systemd[1]: Reloading... Dec 16 12:57:20.222982 zram_generator::config[2516]: No configuration found. Dec 16 12:57:20.563717 systemd[1]: Reloading finished in 534 ms. Dec 16 12:57:20.611979 kernel: audit: type=1334 audit(1765889840.606:292): prog-id=65 op=LOAD Dec 16 12:57:20.606000 audit: BPF prog-id=65 op=LOAD Dec 16 12:57:20.606000 audit: BPF prog-id=58 op=UNLOAD Dec 16 12:57:20.615984 kernel: audit: type=1334 audit(1765889840.606:293): prog-id=58 op=UNLOAD Dec 16 12:57:20.610000 audit: BPF prog-id=66 op=LOAD Dec 16 12:57:20.610000 audit: BPF prog-id=67 op=LOAD Dec 16 12:57:20.619624 kernel: audit: type=1334 audit(1765889840.610:294): prog-id=66 op=LOAD Dec 16 12:57:20.619743 kernel: audit: type=1334 audit(1765889840.610:295): prog-id=67 op=LOAD Dec 16 12:57:20.619801 kernel: audit: type=1334 audit(1765889840.610:296): prog-id=59 op=UNLOAD Dec 16 12:57:20.610000 audit: BPF prog-id=59 op=UNLOAD Dec 16 12:57:20.610000 audit: BPF prog-id=60 op=UNLOAD Dec 16 12:57:20.622467 kernel: audit: type=1334 audit(1765889840.610:297): prog-id=60 op=UNLOAD Dec 16 12:57:20.622566 kernel: audit: type=1334 audit(1765889840.611:298): prog-id=68 op=LOAD Dec 16 12:57:20.611000 audit: BPF prog-id=68 op=LOAD Dec 16 12:57:20.625270 kernel: audit: type=1334 audit(1765889840.611:299): prog-id=57 op=UNLOAD Dec 16 12:57:20.611000 audit: BPF prog-id=57 op=UNLOAD Dec 16 12:57:20.612000 audit: BPF prog-id=69 op=LOAD Dec 16 12:57:20.612000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:57:20.628000 audit: BPF prog-id=70 op=LOAD Dec 16 12:57:20.628000 audit: BPF prog-id=48 op=UNLOAD Dec 16 12:57:20.628000 audit: BPF prog-id=71 op=LOAD Dec 16 12:57:20.629000 audit: BPF prog-id=72 op=LOAD Dec 16 12:57:20.629000 audit: BPF prog-id=49 op=UNLOAD Dec 16 12:57:20.629000 audit: BPF prog-id=50 op=UNLOAD Dec 16 12:57:20.630000 audit: BPF prog-id=73 op=LOAD Dec 16 12:57:20.630000 audit: BPF prog-id=41 op=UNLOAD Dec 16 12:57:20.630000 audit: BPF prog-id=74 op=LOAD Dec 16 12:57:20.630000 audit: BPF prog-id=75 op=LOAD Dec 16 12:57:20.630000 audit: BPF prog-id=42 op=UNLOAD Dec 16 12:57:20.630000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:57:20.632000 audit: BPF prog-id=76 op=LOAD Dec 16 12:57:20.632000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:57:20.634000 audit: BPF prog-id=77 op=LOAD Dec 16 12:57:20.634000 audit: BPF prog-id=78 op=LOAD Dec 16 12:57:20.634000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:57:20.634000 audit: BPF prog-id=47 op=UNLOAD Dec 16 12:57:20.634000 audit: BPF prog-id=79 op=LOAD Dec 16 12:57:20.634000 audit: BPF prog-id=80 op=LOAD Dec 16 12:57:20.634000 audit: BPF prog-id=54 op=UNLOAD Dec 16 12:57:20.634000 audit: BPF prog-id=55 op=UNLOAD Dec 16 12:57:20.637000 audit: BPF prog-id=81 op=LOAD Dec 16 12:57:20.637000 audit: BPF prog-id=56 op=UNLOAD Dec 
16 12:57:20.638000 audit: BPF prog-id=82 op=LOAD Dec 16 12:57:20.638000 audit: BPF prog-id=51 op=UNLOAD Dec 16 12:57:20.638000 audit: BPF prog-id=83 op=LOAD Dec 16 12:57:20.638000 audit: BPF prog-id=84 op=LOAD Dec 16 12:57:20.638000 audit: BPF prog-id=52 op=UNLOAD Dec 16 12:57:20.638000 audit: BPF prog-id=53 op=UNLOAD Dec 16 12:57:20.639000 audit: BPF prog-id=85 op=LOAD Dec 16 12:57:20.639000 audit: BPF prog-id=64 op=UNLOAD Dec 16 12:57:20.661084 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:57:20.661218 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:57:20.661697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:57:20.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:57:20.661795 systemd[1]: kubelet.service: Consumed 177ms CPU time, 97.9M memory peak. Dec 16 12:57:20.665015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:57:20.852551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:57:20.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:20.866493 (kubelet)[2578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:57:20.974464 kubelet[2578]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:57:20.975057 kubelet[2578]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:57:20.975167 kubelet[2578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
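The three deprecation warnings just above all point at the same remedy: move the flags into the kubelet configuration file referenced by --config. The sketch below is only illustrative; the field names come from the kubelet.config.k8s.io/v1beta1 KubeletConfiguration API and are worth double-checking against the linked documentation, the containerd socket path is a placeholder rather than a value read from this host, the volume-plugin and static-pod directories echo the paths the kubelet itself logs further down, and --pod-infra-container-image has no config-file counterpart (per the warning, the sandbox image will come from the CRI runtime instead):

    # Hedged sketch of a minimal KubeletConfiguration carrying the deprecated flags.
    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    staticPodPath: /etc/kubernetes/manifests
    """

    # from pathlib import Path
    # Path("/var/lib/kubelet/config.yaml").write_text(KUBELET_CONFIG)   # not executed here
    print(KUBELET_CONFIG)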
Dec 16 12:57:20.978851 kubelet[2578]: I1216 12:57:20.978748 2578 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:57:21.884071 kubelet[2578]: I1216 12:57:21.884003 2578 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:57:21.884071 kubelet[2578]: I1216 12:57:21.884055 2578 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:57:21.885991 kubelet[2578]: I1216 12:57:21.884739 2578 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:57:21.938031 kubelet[2578]: I1216 12:57:21.937520 2578 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:57:21.941393 kubelet[2578]: E1216 12:57:21.940329 2578 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.27.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.27.222:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:57:21.979436 kubelet[2578]: I1216 12:57:21.979391 2578 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:57:21.990417 kubelet[2578]: I1216 12:57:21.990267 2578 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 12:57:21.997695 kubelet[2578]: I1216 12:57:21.997416 2578 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:57:22.001811 kubelet[2578]: I1216 12:57:21.997483 2578 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gtzk5.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:57:22.004464 kubelet[2578]: I1216 12:57:22.004365 2578 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:57:22.004464 kubelet[2578]: I1216 
12:57:22.004404 2578 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:57:22.005607 kubelet[2578]: I1216 12:57:22.005553 2578 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:57:22.009365 kubelet[2578]: I1216 12:57:22.008866 2578 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:57:22.009365 kubelet[2578]: I1216 12:57:22.008903 2578 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:57:22.009365 kubelet[2578]: I1216 12:57:22.008978 2578 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:57:22.009365 kubelet[2578]: I1216 12:57:22.009008 2578 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:57:22.018991 kubelet[2578]: E1216 12:57:22.018568 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.27.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gtzk5.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.27.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:57:22.019357 kubelet[2578]: I1216 12:57:22.019331 2578 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:57:22.022028 kubelet[2578]: I1216 12:57:22.022001 2578 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:57:22.037939 kubelet[2578]: W1216 12:57:22.037891 2578 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:57:22.053860 kubelet[2578]: I1216 12:57:22.053820 2578 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:57:22.054139 kubelet[2578]: I1216 12:57:22.054119 2578 server.go:1289] "Started kubelet" Dec 16 12:57:22.056158 kubelet[2578]: E1216 12:57:22.056097 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.27.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.27.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:57:22.063007 kubelet[2578]: I1216 12:57:22.062915 2578 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:57:22.065040 kubelet[2578]: I1216 12:57:22.065003 2578 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:57:22.067198 kubelet[2578]: I1216 12:57:22.067170 2578 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:57:22.069265 kubelet[2578]: I1216 12:57:22.069227 2578 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:57:22.072902 kubelet[2578]: I1216 12:57:22.072877 2578 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:57:22.081758 kubelet[2578]: I1216 12:57:22.081724 2578 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:57:22.082843 kubelet[2578]: E1216 12:57:22.077916 2578 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.27.222:6443/api/v1/namespaces/default/events\": dial tcp 10.244.27.222:6443: connect: 
connection refused" event="&Event{ObjectMeta:{srv-gtzk5.gb1.brightbox.com.1881b3748c1a5843 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-gtzk5.gb1.brightbox.com,UID:srv-gtzk5.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-gtzk5.gb1.brightbox.com,},FirstTimestamp:2025-12-16 12:57:22.054060099 +0000 UTC m=+1.178389252,LastTimestamp:2025-12-16 12:57:22.054060099 +0000 UTC m=+1.178389252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-gtzk5.gb1.brightbox.com,}" Dec 16 12:57:22.090250 kubelet[2578]: E1216 12:57:22.090121 2578 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" Dec 16 12:57:22.090363 kubelet[2578]: I1216 12:57:22.090226 2578 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:57:22.092945 kubelet[2578]: I1216 12:57:22.090514 2578 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:57:22.093819 kubelet[2578]: I1216 12:57:22.093797 2578 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:57:22.096424 kubelet[2578]: E1216 12:57:22.096390 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.27.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.27.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:57:22.098132 kubelet[2578]: E1216 12:57:22.097381 2578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.27.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gtzk5.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.27.222:6443: connect: connection refused" interval="200ms" Dec 16 12:57:22.098258 kubelet[2578]: I1216 12:57:22.097823 2578 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:57:22.098600 kubelet[2578]: I1216 12:57:22.098572 2578 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:57:22.100000 audit[2594]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:22.100000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffff0a7d370 a2=0 a3=0 items=0 ppid=2578 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.100000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:57:22.102212 kubelet[2578]: E1216 12:57:22.102169 2578 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:57:22.102986 kubelet[2578]: I1216 12:57:22.102478 2578 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:57:22.106000 audit[2595]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:22.106000 audit[2595]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0aa645c0 a2=0 a3=0 items=0 ppid=2578 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.106000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:57:22.119000 audit[2601]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:22.119000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffed9437560 a2=0 a3=0 items=0 ppid=2578 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.119000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:57:22.126000 audit[2604]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:22.126000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe5f9e2160 a2=0 a3=0 items=0 ppid=2578 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.126000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:57:22.133660 kubelet[2578]: I1216 12:57:22.133619 2578 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:57:22.133660 kubelet[2578]: I1216 12:57:22.133649 2578 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:57:22.136763 kubelet[2578]: I1216 12:57:22.133690 2578 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:57:22.150979 kubelet[2578]: I1216 12:57:22.150607 2578 policy_none.go:49] "None policy: Start" Dec 16 12:57:22.150979 kubelet[2578]: I1216 12:57:22.150665 2578 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:57:22.150979 kubelet[2578]: I1216 12:57:22.150696 2578 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:57:22.158000 audit[2608]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:22.158000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd4fcefec0 a2=0 a3=0 items=0 ppid=2578 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.162135 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Dec 16 12:57:22.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 12:57:22.164701 kubelet[2578]: I1216 12:57:22.164661 2578 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:57:22.165000 audit[2611]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:22.165000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc69ec28f0 a2=0 a3=0 items=0 ppid=2578 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.165000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:57:22.167000 audit[2612]: NETFILTER_CFG table=nat:48 family=2 entries=1 op=nft_register_chain pid=2612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:22.167000 audit[2612]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff309e9bf0 a2=0 a3=0 items=0 ppid=2578 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.167000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:57:22.168000 audit[2610]: NETFILTER_CFG table=mangle:49 family=10 entries=2 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:22.168000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe58445750 a2=0 a3=0 items=0 ppid=2578 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.168000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:57:22.169474 kubelet[2578]: I1216 12:57:22.169448 2578 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 12:57:22.169723 kubelet[2578]: I1216 12:57:22.169497 2578 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:57:22.169723 kubelet[2578]: I1216 12:57:22.169719 2578 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
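The audit NETFILTER_CFG / SYSCALL / PROCTITLE triplets above record each iptables and ip6tables call the kubelet makes while creating its KUBE-IPTABLES-HINT, KUBE-FIREWALL and KUBE-KUBELET-CANARY chains. The PROCTITLE value is the full command line, hex-encoded with NUL bytes between arguments; a minimal decoding sketch in Python, using the ip6tables record above:

    # Decode an audit PROCTITLE value; NUL bytes separate the arguments.
    hex_proctitle = ("6970367461626C6573002D770035002D5700313030303030002D4E"
                     "004B5542452D49505441424C45532D48494E54002D74006D616E676C65")
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # ip6tables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle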
Dec 16 12:57:22.169862 kubelet[2578]: I1216 12:57:22.169740 2578 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:57:22.169862 kubelet[2578]: E1216 12:57:22.169806 2578 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:57:22.170000 audit[2613]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2613 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:22.170000 audit[2613]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1dc39960 a2=0 a3=0 items=0 ppid=2578 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.170000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:57:22.173000 audit[2614]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2614 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:22.173000 audit[2614]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda4cecc80 a2=0 a3=0 items=0 ppid=2578 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:57:22.173000 audit[2615]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2615 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:22.173000 audit[2615]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc697dbde0 a2=0 a3=0 items=0 ppid=2578 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.173000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:57:22.175447 kubelet[2578]: E1216 12:57:22.175234 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.27.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.27.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:57:22.176000 audit[2616]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2616 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:22.176000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4e132e60 a2=0 a3=0 items=0 ppid=2578 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.176000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:57:22.180339 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container 
kubepods-burstable.slice. Dec 16 12:57:22.185809 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:57:22.191023 kubelet[2578]: E1216 12:57:22.190990 2578 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" Dec 16 12:57:22.196827 kubelet[2578]: E1216 12:57:22.196759 2578 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:57:22.197711 kubelet[2578]: I1216 12:57:22.197688 2578 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:57:22.197903 kubelet[2578]: I1216 12:57:22.197845 2578 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:57:22.199219 kubelet[2578]: I1216 12:57:22.199181 2578 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:57:22.202233 kubelet[2578]: E1216 12:57:22.202145 2578 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:57:22.202233 kubelet[2578]: E1216 12:57:22.202227 2578 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-gtzk5.gb1.brightbox.com\" not found" Dec 16 12:57:22.288808 systemd[1]: Created slice kubepods-burstable-pod0e8c55a37e677e7fec0024e27091a053.slice - libcontainer container kubepods-burstable-pod0e8c55a37e677e7fec0024e27091a053.slice. Dec 16 12:57:22.299053 kubelet[2578]: E1216 12:57:22.298994 2578 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.302140 kubelet[2578]: I1216 12:57:22.302102 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e8c55a37e677e7fec0024e27091a053-ca-certs\") pod \"kube-apiserver-srv-gtzk5.gb1.brightbox.com\" (UID: \"0e8c55a37e677e7fec0024e27091a053\") " pod="kube-system/kube-apiserver-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.302254 kubelet[2578]: I1216 12:57:22.302161 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e8c55a37e677e7fec0024e27091a053-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gtzk5.gb1.brightbox.com\" (UID: \"0e8c55a37e677e7fec0024e27091a053\") " pod="kube-system/kube-apiserver-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.302254 kubelet[2578]: I1216 12:57:22.302200 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b63cf051b55b7d9af43465545df1004-k8s-certs\") pod \"kube-controller-manager-srv-gtzk5.gb1.brightbox.com\" (UID: \"1b63cf051b55b7d9af43465545df1004\") " pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.302254 kubelet[2578]: I1216 12:57:22.302245 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b63cf051b55b7d9af43465545df1004-kubeconfig\") pod \"kube-controller-manager-srv-gtzk5.gb1.brightbox.com\" (UID: \"1b63cf051b55b7d9af43465545df1004\") " 
pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.302416 kubelet[2578]: I1216 12:57:22.302317 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b63cf051b55b7d9af43465545df1004-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gtzk5.gb1.brightbox.com\" (UID: \"1b63cf051b55b7d9af43465545df1004\") " pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.302416 kubelet[2578]: I1216 12:57:22.302356 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc0d2f8e385063f35b6e4d6f14df0b48-kubeconfig\") pod \"kube-scheduler-srv-gtzk5.gb1.brightbox.com\" (UID: \"fc0d2f8e385063f35b6e4d6f14df0b48\") " pod="kube-system/kube-scheduler-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.302416 kubelet[2578]: I1216 12:57:22.302388 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e8c55a37e677e7fec0024e27091a053-k8s-certs\") pod \"kube-apiserver-srv-gtzk5.gb1.brightbox.com\" (UID: \"0e8c55a37e677e7fec0024e27091a053\") " pod="kube-system/kube-apiserver-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.302560 kubelet[2578]: I1216 12:57:22.302422 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b63cf051b55b7d9af43465545df1004-ca-certs\") pod \"kube-controller-manager-srv-gtzk5.gb1.brightbox.com\" (UID: \"1b63cf051b55b7d9af43465545df1004\") " pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.302560 kubelet[2578]: I1216 12:57:22.302459 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b63cf051b55b7d9af43465545df1004-flexvolume-dir\") pod \"kube-controller-manager-srv-gtzk5.gb1.brightbox.com\" (UID: \"1b63cf051b55b7d9af43465545df1004\") " pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.303913 kubelet[2578]: E1216 12:57:22.303864 2578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.27.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gtzk5.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.27.222:6443: connect: connection refused" interval="400ms" Dec 16 12:57:22.304169 kubelet[2578]: I1216 12:57:22.304132 2578 kubelet_node_status.go:75] "Attempting to register node" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.304760 kubelet[2578]: E1216 12:57:22.304725 2578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.27.222:6443/api/v1/nodes\": dial tcp 10.244.27.222:6443: connect: connection refused" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.313001 systemd[1]: Created slice kubepods-burstable-pod1b63cf051b55b7d9af43465545df1004.slice - libcontainer container kubepods-burstable-pod1b63cf051b55b7d9af43465545df1004.slice. 
Dec 16 12:57:22.316942 kubelet[2578]: E1216 12:57:22.316912 2578 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.320968 systemd[1]: Created slice kubepods-burstable-podfc0d2f8e385063f35b6e4d6f14df0b48.slice - libcontainer container kubepods-burstable-podfc0d2f8e385063f35b6e4d6f14df0b48.slice. Dec 16 12:57:22.324250 kubelet[2578]: E1216 12:57:22.324011 2578 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.507645 kubelet[2578]: I1216 12:57:22.507510 2578 kubelet_node_status.go:75] "Attempting to register node" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.509144 kubelet[2578]: E1216 12:57:22.509103 2578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.27.222:6443/api/v1/nodes\": dial tcp 10.244.27.222:6443: connect: connection refused" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.602914 containerd[1627]: time="2025-12-16T12:57:22.602803061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gtzk5.gb1.brightbox.com,Uid:0e8c55a37e677e7fec0024e27091a053,Namespace:kube-system,Attempt:0,}" Dec 16 12:57:22.620042 containerd[1627]: time="2025-12-16T12:57:22.619825899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gtzk5.gb1.brightbox.com,Uid:1b63cf051b55b7d9af43465545df1004,Namespace:kube-system,Attempt:0,}" Dec 16 12:57:22.625946 containerd[1627]: time="2025-12-16T12:57:22.625843326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gtzk5.gb1.brightbox.com,Uid:fc0d2f8e385063f35b6e4d6f14df0b48,Namespace:kube-system,Attempt:0,}" Dec 16 12:57:22.705357 kubelet[2578]: E1216 12:57:22.705261 2578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.27.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gtzk5.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.27.222:6443: connect: connection refused" interval="800ms" Dec 16 12:57:22.764936 containerd[1627]: time="2025-12-16T12:57:22.764775983Z" level=info msg="connecting to shim bcf6dab2dbe98dbfabe39b3671ccedd27ff1c7b66b249a270175ce517934b615" address="unix:///run/containerd/s/bdbe050813958d5bb72ea72eaebbd7a16f37a920a6a3c2b1878b80b1572cd9aa" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:57:22.769978 containerd[1627]: time="2025-12-16T12:57:22.769665376Z" level=info msg="connecting to shim 09623d98eb89084c44e40bba1da81795309c401ea7d35d6efe036aa5f3570dca" address="unix:///run/containerd/s/58168c81333236073ace7b47e58c8a5aa6cb284785ae1b8c70493369c56c0d36" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:57:22.773154 containerd[1627]: time="2025-12-16T12:57:22.773098406Z" level=info msg="connecting to shim 97671893cdb057602af4b219bd92a36ca28d44b306ba45183b1db61d2a1f91c0" address="unix:///run/containerd/s/e9b78e41ab33a4fa93c6bed8610b5cce2024bb16ed011bb9c1966cc0caeace2e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:57:22.914469 kubelet[2578]: I1216 12:57:22.914346 2578 kubelet_node_status.go:75] "Attempting to register node" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.915503 kubelet[2578]: E1216 12:57:22.915452 2578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.244.27.222:6443/api/v1/nodes\": dial tcp 10.244.27.222:6443: connect: connection refused" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:22.935441 systemd[1]: Started cri-containerd-09623d98eb89084c44e40bba1da81795309c401ea7d35d6efe036aa5f3570dca.scope - libcontainer container 09623d98eb89084c44e40bba1da81795309c401ea7d35d6efe036aa5f3570dca. Dec 16 12:57:22.939245 systemd[1]: Started cri-containerd-97671893cdb057602af4b219bd92a36ca28d44b306ba45183b1db61d2a1f91c0.scope - libcontainer container 97671893cdb057602af4b219bd92a36ca28d44b306ba45183b1db61d2a1f91c0. Dec 16 12:57:22.944246 systemd[1]: Started cri-containerd-bcf6dab2dbe98dbfabe39b3671ccedd27ff1c7b66b249a270175ce517934b615.scope - libcontainer container bcf6dab2dbe98dbfabe39b3671ccedd27ff1c7b66b249a270175ce517934b615. Dec 16 12:57:22.977000 audit: BPF prog-id=86 op=LOAD Dec 16 12:57:22.978000 audit: BPF prog-id=87 op=LOAD Dec 16 12:57:22.978000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2638 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663664616232646265393864626661626533396233363731636365 Dec 16 12:57:22.978000 audit: BPF prog-id=87 op=UNLOAD Dec 16 12:57:22.978000 audit[2675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663664616232646265393864626661626533396233363731636365 Dec 16 12:57:22.979000 audit: BPF prog-id=88 op=LOAD Dec 16 12:57:22.979000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2638 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663664616232646265393864626661626533396233363731636365 Dec 16 12:57:22.980000 audit: BPF prog-id=89 op=LOAD Dec 16 12:57:22.980000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2638 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.980000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663664616232646265393864626661626533396233363731636365 Dec 16 12:57:22.980000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:57:22.980000 audit[2675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663664616232646265393864626661626533396233363731636365 Dec 16 12:57:22.980000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:57:22.980000 audit[2675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663664616232646265393864626661626533396233363731636365 Dec 16 12:57:22.980000 audit: BPF prog-id=90 op=LOAD Dec 16 12:57:22.980000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2638 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263663664616232646265393864626661626533396233363731636365 Dec 16 12:57:22.985000 audit: BPF prog-id=91 op=LOAD Dec 16 12:57:22.988000 audit: BPF prog-id=92 op=LOAD Dec 16 12:57:22.988000 audit[2677]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2645 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363731383933636462303537363032616634623231396264393261 Dec 16 12:57:22.989000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:57:22.989000 audit[2677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2645 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.989000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363731383933636462303537363032616634623231396264393261 Dec 16 12:57:22.989000 audit: BPF prog-id=93 op=LOAD Dec 16 12:57:22.989000 audit[2677]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2645 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363731383933636462303537363032616634623231396264393261 Dec 16 12:57:22.989000 audit: BPF prog-id=94 op=LOAD Dec 16 12:57:22.989000 audit[2677]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2645 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363731383933636462303537363032616634623231396264393261 Dec 16 12:57:22.989000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:57:22.989000 audit[2677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2645 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363731383933636462303537363032616634623231396264393261 Dec 16 12:57:22.989000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:57:22.989000 audit[2677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2645 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363731383933636462303537363032616634623231396264393261 Dec 16 12:57:22.989000 audit: BPF prog-id=95 op=LOAD Dec 16 12:57:22.989000 audit[2677]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2645 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:22.989000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363731383933636462303537363032616634623231396264393261 Dec 16 12:57:22.996641 kubelet[2578]: E1216 12:57:22.992386 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.27.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.27.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:57:22.999000 audit: BPF prog-id=96 op=LOAD Dec 16 12:57:23.000000 audit: BPF prog-id=97 op=LOAD Dec 16 12:57:23.000000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2643 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039363233643938656238393038346334346534306262613164613831 Dec 16 12:57:23.000000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:57:23.000000 audit[2673]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039363233643938656238393038346334346534306262613164613831 Dec 16 12:57:23.002000 audit: BPF prog-id=98 op=LOAD Dec 16 12:57:23.002000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2643 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039363233643938656238393038346334346534306262613164613831 Dec 16 12:57:23.002000 audit: BPF prog-id=99 op=LOAD Dec 16 12:57:23.002000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2643 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039363233643938656238393038346334346534306262613164613831 Dec 16 12:57:23.002000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:57:23.002000 audit[2673]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039363233643938656238393038346334346534306262613164613831 Dec 16 12:57:23.002000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:57:23.002000 audit[2673]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039363233643938656238393038346334346534306262613164613831 Dec 16 12:57:23.002000 audit: BPF prog-id=100 op=LOAD Dec 16 12:57:23.002000 audit[2673]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2643 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039363233643938656238393038346334346534306262613164613831 Dec 16 12:57:23.087102 containerd[1627]: time="2025-12-16T12:57:23.087015289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gtzk5.gb1.brightbox.com,Uid:0e8c55a37e677e7fec0024e27091a053,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcf6dab2dbe98dbfabe39b3671ccedd27ff1c7b66b249a270175ce517934b615\"" Dec 16 12:57:23.107314 containerd[1627]: time="2025-12-16T12:57:23.107243892Z" level=info msg="CreateContainer within sandbox \"bcf6dab2dbe98dbfabe39b3671ccedd27ff1c7b66b249a270175ce517934b615\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:57:23.108729 containerd[1627]: time="2025-12-16T12:57:23.107475960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gtzk5.gb1.brightbox.com,Uid:fc0d2f8e385063f35b6e4d6f14df0b48,Namespace:kube-system,Attempt:0,} returns sandbox id \"09623d98eb89084c44e40bba1da81795309c401ea7d35d6efe036aa5f3570dca\"" Dec 16 12:57:23.117372 containerd[1627]: time="2025-12-16T12:57:23.117302508Z" level=info msg="CreateContainer within sandbox \"09623d98eb89084c44e40bba1da81795309c401ea7d35d6efe036aa5f3570dca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:57:23.134042 containerd[1627]: time="2025-12-16T12:57:23.132924286Z" level=info msg="Container 2844505921ca5a281f329c147796e55baabfb346b3567df99c2d3bec07cdb9e7: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:57:23.134042 containerd[1627]: time="2025-12-16T12:57:23.133353406Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-srv-gtzk5.gb1.brightbox.com,Uid:1b63cf051b55b7d9af43465545df1004,Namespace:kube-system,Attempt:0,} returns sandbox id \"97671893cdb057602af4b219bd92a36ca28d44b306ba45183b1db61d2a1f91c0\"" Dec 16 12:57:23.139109 containerd[1627]: time="2025-12-16T12:57:23.139051401Z" level=info msg="CreateContainer within sandbox \"97671893cdb057602af4b219bd92a36ca28d44b306ba45183b1db61d2a1f91c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:57:23.139969 containerd[1627]: time="2025-12-16T12:57:23.139918893Z" level=info msg="Container 507fed7722fe265874002a389714377d197c12ed28fc63799f0867390310042c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:57:23.146175 containerd[1627]: time="2025-12-16T12:57:23.146019318Z" level=info msg="CreateContainer within sandbox \"bcf6dab2dbe98dbfabe39b3671ccedd27ff1c7b66b249a270175ce517934b615\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2844505921ca5a281f329c147796e55baabfb346b3567df99c2d3bec07cdb9e7\"" Dec 16 12:57:23.147398 containerd[1627]: time="2025-12-16T12:57:23.146832436Z" level=info msg="StartContainer for \"2844505921ca5a281f329c147796e55baabfb346b3567df99c2d3bec07cdb9e7\"" Dec 16 12:57:23.154573 containerd[1627]: time="2025-12-16T12:57:23.154510825Z" level=info msg="connecting to shim 2844505921ca5a281f329c147796e55baabfb346b3567df99c2d3bec07cdb9e7" address="unix:///run/containerd/s/bdbe050813958d5bb72ea72eaebbd7a16f37a920a6a3c2b1878b80b1572cd9aa" protocol=ttrpc version=3 Dec 16 12:57:23.167670 containerd[1627]: time="2025-12-16T12:57:23.167621083Z" level=info msg="CreateContainer within sandbox \"09623d98eb89084c44e40bba1da81795309c401ea7d35d6efe036aa5f3570dca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"507fed7722fe265874002a389714377d197c12ed28fc63799f0867390310042c\"" Dec 16 12:57:23.172082 containerd[1627]: time="2025-12-16T12:57:23.172045298Z" level=info msg="StartContainer for \"507fed7722fe265874002a389714377d197c12ed28fc63799f0867390310042c\"" Dec 16 12:57:23.174588 containerd[1627]: time="2025-12-16T12:57:23.174544561Z" level=info msg="Container 358023c3139e79749fc55efca4759a35671b785655da9e8037c91fde6f0e5558: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:57:23.175616 containerd[1627]: time="2025-12-16T12:57:23.175582199Z" level=info msg="connecting to shim 507fed7722fe265874002a389714377d197c12ed28fc63799f0867390310042c" address="unix:///run/containerd/s/58168c81333236073ace7b47e58c8a5aa6cb284785ae1b8c70493369c56c0d36" protocol=ttrpc version=3 Dec 16 12:57:23.201406 containerd[1627]: time="2025-12-16T12:57:23.201009342Z" level=info msg="CreateContainer within sandbox \"97671893cdb057602af4b219bd92a36ca28d44b306ba45183b1db61d2a1f91c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"358023c3139e79749fc55efca4759a35671b785655da9e8037c91fde6f0e5558\"" Dec 16 12:57:23.201881 containerd[1627]: time="2025-12-16T12:57:23.201845553Z" level=info msg="StartContainer for \"358023c3139e79749fc55efca4759a35671b785655da9e8037c91fde6f0e5558\"" Dec 16 12:57:23.203047 systemd[1]: Started cri-containerd-2844505921ca5a281f329c147796e55baabfb346b3567df99c2d3bec07cdb9e7.scope - libcontainer container 2844505921ca5a281f329c147796e55baabfb346b3567df99c2d3bec07cdb9e7. 
Dec 16 12:57:23.212571 containerd[1627]: time="2025-12-16T12:57:23.212245452Z" level=info msg="connecting to shim 358023c3139e79749fc55efca4759a35671b785655da9e8037c91fde6f0e5558" address="unix:///run/containerd/s/e9b78e41ab33a4fa93c6bed8610b5cce2024bb16ed011bb9c1966cc0caeace2e" protocol=ttrpc version=3 Dec 16 12:57:23.227244 systemd[1]: Started cri-containerd-507fed7722fe265874002a389714377d197c12ed28fc63799f0867390310042c.scope - libcontainer container 507fed7722fe265874002a389714377d197c12ed28fc63799f0867390310042c. Dec 16 12:57:23.248000 audit: BPF prog-id=101 op=LOAD Dec 16 12:57:23.250000 audit: BPF prog-id=102 op=LOAD Dec 16 12:57:23.250000 audit[2755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2638 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343435303539323163613561323831663332396331343737393665 Dec 16 12:57:23.251000 audit: BPF prog-id=102 op=UNLOAD Dec 16 12:57:23.251000 audit[2755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343435303539323163613561323831663332396331343737393665 Dec 16 12:57:23.251000 audit: BPF prog-id=103 op=LOAD Dec 16 12:57:23.251000 audit[2755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2638 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343435303539323163613561323831663332396331343737393665 Dec 16 12:57:23.252000 audit: BPF prog-id=104 op=LOAD Dec 16 12:57:23.252000 audit[2755]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2638 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343435303539323163613561323831663332396331343737393665 Dec 16 12:57:23.252000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:57:23.252000 audit[2755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343435303539323163613561323831663332396331343737393665 Dec 16 12:57:23.252000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:57:23.252000 audit[2755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343435303539323163613561323831663332396331343737393665 Dec 16 12:57:23.252000 audit: BPF prog-id=105 op=LOAD Dec 16 12:57:23.252000 audit[2755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2638 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343435303539323163613561323831663332396331343737393665 Dec 16 12:57:23.258422 systemd[1]: Started cri-containerd-358023c3139e79749fc55efca4759a35671b785655da9e8037c91fde6f0e5558.scope - libcontainer container 358023c3139e79749fc55efca4759a35671b785655da9e8037c91fde6f0e5558. 
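In the audit SYSCALL records that accompany each BPF prog-id LOAD/UNLOAD pair here (and in the runs above and below), arch=c000003e is AUDIT_ARCH_X86_64, syscall=321 is bpf(2) and syscall=3 is close(2): the pattern is consistent with runc loading a program, receiving a file descriptor (the exit value), and the UNLOAD firing when that descriptor is closed. A tiny lookup helper for reading these records (it maps only the two syscall numbers that occur here):

    # x86_64 syscall numbers appearing in the audit records above.
    X86_64_SYSCALLS = {3: "close", 321: "bpf"}

    def describe(arch: str, syscall: int) -> str:
        if arch != "c000003e":          # only the x86_64 table is mapped here
            return str(syscall)
        return f"{X86_64_SYSCALLS.get(syscall, str(syscall))}({syscall})"

    print(describe("c000003e", 321))  # bpf(321)
    print(describe("c000003e", 3))    # close(3)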
Dec 16 12:57:23.267000 audit: BPF prog-id=106 op=LOAD Dec 16 12:57:23.269000 audit: BPF prog-id=107 op=LOAD Dec 16 12:57:23.269000 audit[2761]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2643 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376665643737323266653236353837343030326133383937313433 Dec 16 12:57:23.269000 audit: BPF prog-id=107 op=UNLOAD Dec 16 12:57:23.269000 audit[2761]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376665643737323266653236353837343030326133383937313433 Dec 16 12:57:23.269000 audit: BPF prog-id=108 op=LOAD Dec 16 12:57:23.269000 audit[2761]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2643 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376665643737323266653236353837343030326133383937313433 Dec 16 12:57:23.269000 audit: BPF prog-id=109 op=LOAD Dec 16 12:57:23.269000 audit[2761]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2643 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376665643737323266653236353837343030326133383937313433 Dec 16 12:57:23.269000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:57:23.269000 audit[2761]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376665643737323266653236353837343030326133383937313433 Dec 16 12:57:23.269000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:57:23.269000 audit[2761]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376665643737323266653236353837343030326133383937313433 Dec 16 12:57:23.269000 audit: BPF prog-id=110 op=LOAD Dec 16 12:57:23.269000 audit[2761]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2643 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376665643737323266653236353837343030326133383937313433 Dec 16 12:57:23.331000 audit: BPF prog-id=111 op=LOAD Dec 16 12:57:23.334000 audit: BPF prog-id=112 op=LOAD Dec 16 12:57:23.334000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2645 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383032336333313339653739373439666335356566636134373539 Dec 16 12:57:23.335000 audit: BPF prog-id=112 op=UNLOAD Dec 16 12:57:23.335000 audit[2785]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2645 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383032336333313339653739373439666335356566636134373539 Dec 16 12:57:23.340000 audit: BPF prog-id=113 op=LOAD Dec 16 12:57:23.340000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2645 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383032336333313339653739373439666335356566636134373539 Dec 16 12:57:23.341000 audit: BPF prog-id=114 op=LOAD Dec 16 12:57:23.341000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2645 pid=2785 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383032336333313339653739373439666335356566636134373539 Dec 16 12:57:23.341000 audit: BPF prog-id=114 op=UNLOAD Dec 16 12:57:23.341000 audit[2785]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2645 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383032336333313339653739373439666335356566636134373539 Dec 16 12:57:23.342000 audit: BPF prog-id=113 op=UNLOAD Dec 16 12:57:23.342000 audit[2785]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2645 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383032336333313339653739373439666335356566636134373539 Dec 16 12:57:23.342000 audit: BPF prog-id=115 op=LOAD Dec 16 12:57:23.342000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2645 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:23.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383032336333313339653739373439666335356566636134373539 Dec 16 12:57:23.368117 containerd[1627]: time="2025-12-16T12:57:23.367925491Z" level=info msg="StartContainer for \"2844505921ca5a281f329c147796e55baabfb346b3567df99c2d3bec07cdb9e7\" returns successfully" Dec 16 12:57:23.390433 kubelet[2578]: E1216 12:57:23.390383 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.27.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.27.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:57:23.390897 containerd[1627]: time="2025-12-16T12:57:23.390812225Z" level=info msg="StartContainer for \"507fed7722fe265874002a389714377d197c12ed28fc63799f0867390310042c\" returns successfully" Dec 16 12:57:23.441989 containerd[1627]: time="2025-12-16T12:57:23.441897088Z" level=info msg="StartContainer for \"358023c3139e79749fc55efca4759a35671b785655da9e8037c91fde6f0e5558\" returns successfully" Dec 16 
12:57:23.453866 kubelet[2578]: E1216 12:57:23.453720 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.27.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gtzk5.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.27.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:57:23.506768 kubelet[2578]: E1216 12:57:23.506680 2578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.27.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gtzk5.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.27.222:6443: connect: connection refused" interval="1.6s" Dec 16 12:57:23.691464 kubelet[2578]: E1216 12:57:23.691292 2578 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.27.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.27.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:57:23.719020 kubelet[2578]: I1216 12:57:23.718681 2578 kubelet_node_status.go:75] "Attempting to register node" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:23.719413 kubelet[2578]: E1216 12:57:23.719380 2578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.27.222:6443/api/v1/nodes\": dial tcp 10.244.27.222:6443: connect: connection refused" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:24.203215 kubelet[2578]: E1216 12:57:24.203165 2578 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:24.209715 kubelet[2578]: E1216 12:57:24.209684 2578 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:24.215617 kubelet[2578]: E1216 12:57:24.215590 2578 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:25.219540 kubelet[2578]: E1216 12:57:25.219487 2578 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:25.220896 kubelet[2578]: E1216 12:57:25.220155 2578 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:25.220896 kubelet[2578]: E1216 12:57:25.220704 2578 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:25.322453 kubelet[2578]: I1216 12:57:25.322412 2578 kubelet_node_status.go:75] "Attempting to register node" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:26.221471 kubelet[2578]: E1216 12:57:26.221239 2578 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:26.222488 kubelet[2578]: E1216 
12:57:26.221826 2578 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gtzk5.gb1.brightbox.com\" not found" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:26.487821 kubelet[2578]: E1216 12:57:26.487670 2578 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-gtzk5.gb1.brightbox.com\" not found" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:26.488968 kubelet[2578]: I1216 12:57:26.488694 2578 kubelet_node_status.go:78] "Successfully registered node" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:26.497402 kubelet[2578]: I1216 12:57:26.497373 2578 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:26.531882 kubelet[2578]: E1216 12:57:26.531153 2578 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-gtzk5.gb1.brightbox.com.1881b3748c1a5843 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-gtzk5.gb1.brightbox.com,UID:srv-gtzk5.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-gtzk5.gb1.brightbox.com,},FirstTimestamp:2025-12-16 12:57:22.054060099 +0000 UTC m=+1.178389252,LastTimestamp:2025-12-16 12:57:22.054060099 +0000 UTC m=+1.178389252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-gtzk5.gb1.brightbox.com,}" Dec 16 12:57:26.622494 kubelet[2578]: E1216 12:57:26.622437 2578 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-gtzk5.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:26.623547 kubelet[2578]: I1216 12:57:26.622801 2578 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:26.633421 kubelet[2578]: E1216 12:57:26.633382 2578 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-gtzk5.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:26.633872 kubelet[2578]: I1216 12:57:26.633589 2578 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:26.664305 kubelet[2578]: E1216 12:57:26.664247 2578 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-gtzk5.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:27.028893 kubelet[2578]: I1216 12:57:27.028815 2578 apiserver.go:52] "Watching apiserver" Dec 16 12:57:27.095094 kubelet[2578]: I1216 12:57:27.095042 2578 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:57:28.227937 kubelet[2578]: I1216 12:57:28.227889 2578 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:28.237630 kubelet[2578]: I1216 12:57:28.237165 2578 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which 
can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:57:28.436602 systemd[1]: Reload requested from client PID 2857 ('systemctl') (unit session-12.scope)... Dec 16 12:57:28.437132 systemd[1]: Reloading... Dec 16 12:57:28.600427 zram_generator::config[2905]: No configuration found. Dec 16 12:57:29.001681 systemd[1]: Reloading finished in 563 ms. Dec 16 12:57:29.035887 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:57:29.056858 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:57:29.057373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:57:29.057488 systemd[1]: kubelet.service: Consumed 1.738s CPU time, 131.3M memory peak. Dec 16 12:57:29.064108 kernel: kauditd_printk_skb: 204 callbacks suppressed Dec 16 12:57:29.064279 kernel: audit: type=1131 audit(1765889849.056:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.067207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:57:29.070988 kernel: audit: type=1334 audit(1765889849.067:397): prog-id=116 op=LOAD Dec 16 12:57:29.067000 audit: BPF prog-id=116 op=LOAD Dec 16 12:57:29.067000 audit: BPF prog-id=81 op=UNLOAD Dec 16 12:57:29.074025 kernel: audit: type=1334 audit(1765889849.067:398): prog-id=81 op=UNLOAD Dec 16 12:57:29.068000 audit: BPF prog-id=117 op=LOAD Dec 16 12:57:29.076982 kernel: audit: type=1334 audit(1765889849.068:399): prog-id=117 op=LOAD Dec 16 12:57:29.079981 kernel: audit: type=1334 audit(1765889849.068:400): prog-id=118 op=LOAD Dec 16 12:57:29.068000 audit: BPF prog-id=118 op=LOAD Dec 16 12:57:29.083330 kernel: audit: type=1334 audit(1765889849.068:401): prog-id=79 op=UNLOAD Dec 16 12:57:29.083410 kernel: audit: type=1334 audit(1765889849.068:402): prog-id=80 op=UNLOAD Dec 16 12:57:29.068000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:57:29.068000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:57:29.085001 kernel: audit: type=1334 audit(1765889849.070:403): prog-id=119 op=LOAD Dec 16 12:57:29.070000 audit: BPF prog-id=119 op=LOAD Dec 16 12:57:29.070000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:57:29.086708 kernel: audit: type=1334 audit(1765889849.070:404): prog-id=76 op=UNLOAD Dec 16 12:57:29.086782 kernel: audit: type=1334 audit(1765889849.070:405): prog-id=120 op=LOAD Dec 16 12:57:29.070000 audit: BPF prog-id=120 op=LOAD Dec 16 12:57:29.070000 audit: BPF prog-id=121 op=LOAD Dec 16 12:57:29.070000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:57:29.070000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:57:29.072000 audit: BPF prog-id=122 op=LOAD Dec 16 12:57:29.072000 audit: BPF prog-id=85 op=UNLOAD Dec 16 12:57:29.073000 audit: BPF prog-id=123 op=LOAD Dec 16 12:57:29.073000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:57:29.073000 audit: BPF prog-id=124 op=LOAD Dec 16 12:57:29.074000 audit: BPF prog-id=125 op=LOAD Dec 16 12:57:29.074000 audit: BPF prog-id=71 op=UNLOAD Dec 16 12:57:29.074000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:57:29.075000 audit: BPF prog-id=126 op=LOAD Dec 16 12:57:29.075000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:57:29.077000 audit: BPF prog-id=127 op=LOAD Dec 16 
12:57:29.077000 audit: BPF prog-id=65 op=UNLOAD Dec 16 12:57:29.077000 audit: BPF prog-id=128 op=LOAD Dec 16 12:57:29.077000 audit: BPF prog-id=129 op=LOAD Dec 16 12:57:29.077000 audit: BPF prog-id=66 op=UNLOAD Dec 16 12:57:29.078000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:57:29.079000 audit: BPF prog-id=130 op=LOAD Dec 16 12:57:29.079000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:57:29.080000 audit: BPF prog-id=131 op=LOAD Dec 16 12:57:29.081000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:57:29.081000 audit: BPF prog-id=132 op=LOAD Dec 16 12:57:29.081000 audit: BPF prog-id=133 op=LOAD Dec 16 12:57:29.081000 audit: BPF prog-id=83 op=UNLOAD Dec 16 12:57:29.081000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:57:29.083000 audit: BPF prog-id=134 op=LOAD Dec 16 12:57:29.087000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:57:29.087000 audit: BPF prog-id=135 op=LOAD Dec 16 12:57:29.087000 audit: BPF prog-id=136 op=LOAD Dec 16 12:57:29.087000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:57:29.087000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:57:29.394834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:57:29.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.401472 (kubelet)[2968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:57:29.502230 kubelet[2968]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:57:29.502230 kubelet[2968]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:57:29.502230 kubelet[2968]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:57:29.505217 kubelet[2968]: I1216 12:57:29.505142 2968 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:57:29.530634 kubelet[2968]: I1216 12:57:29.529827 2968 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:57:29.530634 kubelet[2968]: I1216 12:57:29.529874 2968 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:57:29.530634 kubelet[2968]: I1216 12:57:29.530183 2968 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:57:29.535902 kubelet[2968]: I1216 12:57:29.535492 2968 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:57:29.541864 kubelet[2968]: I1216 12:57:29.541296 2968 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:57:29.593414 kubelet[2968]: I1216 12:57:29.593178 2968 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:57:29.599220 kubelet[2968]: I1216 12:57:29.599167 2968 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:57:29.599984 kubelet[2968]: I1216 12:57:29.599633 2968 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:57:29.599984 kubelet[2968]: I1216 12:57:29.599688 2968 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gtzk5.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:57:29.603394 kubelet[2968]: I1216 12:57:29.603329 2968 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:57:29.603394 kubelet[2968]: I1216 12:57:29.603367 2968 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:57:29.603760 kubelet[2968]: I1216 12:57:29.603452 2968 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:57:29.604734 kubelet[2968]: I1216 12:57:29.603775 2968 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:57:29.604734 kubelet[2968]: I1216 12:57:29.603795 2968 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:57:29.608029 kubelet[2968]: I1216 12:57:29.605941 2968 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:57:29.610017 kubelet[2968]: I1216 12:57:29.609983 2968 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:57:29.619294 kubelet[2968]: I1216 12:57:29.618993 2968 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:57:29.619736 kubelet[2968]: I1216 12:57:29.619704 2968 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:57:29.628867 kubelet[2968]: I1216 12:57:29.627268 2968 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:57:29.628867 kubelet[2968]: I1216 12:57:29.627329 2968 server.go:1289] "Started kubelet" Dec 16 12:57:29.636537 kubelet[2968]: I1216 12:57:29.636471 2968 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 
16 12:57:29.636989 kubelet[2968]: I1216 12:57:29.636937 2968 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:57:29.637067 kubelet[2968]: I1216 12:57:29.637036 2968 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:57:29.640490 kubelet[2968]: I1216 12:57:29.640395 2968 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:57:29.645840 kubelet[2968]: I1216 12:57:29.645397 2968 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:57:29.666447 kubelet[2968]: E1216 12:57:29.666133 2968 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:57:29.671300 kubelet[2968]: I1216 12:57:29.670189 2968 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:57:29.688789 kubelet[2968]: I1216 12:57:29.687225 2968 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:57:29.688789 kubelet[2968]: I1216 12:57:29.687884 2968 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:57:29.688789 kubelet[2968]: I1216 12:57:29.688100 2968 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:57:29.700111 kubelet[2968]: I1216 12:57:29.699432 2968 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:57:29.700111 kubelet[2968]: I1216 12:57:29.699576 2968 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:57:29.713717 kubelet[2968]: I1216 12:57:29.713654 2968 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:57:29.725749 kubelet[2968]: I1216 12:57:29.725676 2968 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:57:29.728497 kubelet[2968]: I1216 12:57:29.728062 2968 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 12:57:29.728497 kubelet[2968]: I1216 12:57:29.728096 2968 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:57:29.728497 kubelet[2968]: I1216 12:57:29.728130 2968 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:57:29.728497 kubelet[2968]: I1216 12:57:29.728144 2968 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:57:29.728497 kubelet[2968]: E1216 12:57:29.728201 2968 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:57:29.819934 kubelet[2968]: I1216 12:57:29.819899 2968 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:57:29.820469 kubelet[2968]: I1216 12:57:29.820197 2968 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:57:29.820469 kubelet[2968]: I1216 12:57:29.820235 2968 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:57:29.820737 kubelet[2968]: I1216 12:57:29.820713 2968 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:57:29.820859 kubelet[2968]: I1216 12:57:29.820823 2968 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:57:29.821302 kubelet[2968]: I1216 12:57:29.820981 2968 policy_none.go:49] "None policy: Start" Dec 16 12:57:29.821302 kubelet[2968]: I1216 12:57:29.821014 2968 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:57:29.821302 kubelet[2968]: I1216 12:57:29.821041 2968 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:57:29.821302 kubelet[2968]: I1216 12:57:29.821201 2968 state_mem.go:75] "Updated machine memory state" Dec 16 12:57:29.828660 kubelet[2968]: E1216 12:57:29.828387 2968 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:57:29.836172 kubelet[2968]: E1216 12:57:29.836137 2968 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:57:29.836666 kubelet[2968]: I1216 12:57:29.836624 2968 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:57:29.838820 kubelet[2968]: I1216 12:57:29.838767 2968 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:57:29.839471 kubelet[2968]: I1216 12:57:29.839447 2968 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:57:29.845811 kubelet[2968]: E1216 12:57:29.845780 2968 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:57:29.958647 kubelet[2968]: I1216 12:57:29.958520 2968 kubelet_node_status.go:75] "Attempting to register node" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:29.972930 kubelet[2968]: I1216 12:57:29.972887 2968 kubelet_node_status.go:124] "Node was previously registered" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:29.973132 kubelet[2968]: I1216 12:57:29.973028 2968 kubelet_node_status.go:78] "Successfully registered node" node="srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.030256 kubelet[2968]: I1216 12:57:30.030193 2968 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.033006 kubelet[2968]: I1216 12:57:30.031473 2968 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.033006 kubelet[2968]: I1216 12:57:30.032137 2968 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.043478 kubelet[2968]: I1216 12:57:30.043226 2968 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:57:30.046675 kubelet[2968]: I1216 12:57:30.046127 2968 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:57:30.047520 kubelet[2968]: I1216 12:57:30.046945 2968 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:57:30.047520 kubelet[2968]: E1216 12:57:30.047071 2968 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-gtzk5.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.096135 kubelet[2968]: I1216 12:57:30.095481 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e8c55a37e677e7fec0024e27091a053-ca-certs\") pod \"kube-apiserver-srv-gtzk5.gb1.brightbox.com\" (UID: \"0e8c55a37e677e7fec0024e27091a053\") " pod="kube-system/kube-apiserver-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.096135 kubelet[2968]: I1216 12:57:30.095559 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e8c55a37e677e7fec0024e27091a053-k8s-certs\") pod \"kube-apiserver-srv-gtzk5.gb1.brightbox.com\" (UID: \"0e8c55a37e677e7fec0024e27091a053\") " pod="kube-system/kube-apiserver-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.096135 kubelet[2968]: I1216 12:57:30.095596 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e8c55a37e677e7fec0024e27091a053-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gtzk5.gb1.brightbox.com\" (UID: \"0e8c55a37e677e7fec0024e27091a053\") " pod="kube-system/kube-apiserver-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.096135 kubelet[2968]: I1216 12:57:30.095637 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1b63cf051b55b7d9af43465545df1004-k8s-certs\") pod \"kube-controller-manager-srv-gtzk5.gb1.brightbox.com\" (UID: \"1b63cf051b55b7d9af43465545df1004\") " pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.096135 kubelet[2968]: I1216 12:57:30.095672 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc0d2f8e385063f35b6e4d6f14df0b48-kubeconfig\") pod \"kube-scheduler-srv-gtzk5.gb1.brightbox.com\" (UID: \"fc0d2f8e385063f35b6e4d6f14df0b48\") " pod="kube-system/kube-scheduler-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.096513 kubelet[2968]: I1216 12:57:30.095713 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b63cf051b55b7d9af43465545df1004-ca-certs\") pod \"kube-controller-manager-srv-gtzk5.gb1.brightbox.com\" (UID: \"1b63cf051b55b7d9af43465545df1004\") " pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.096513 kubelet[2968]: I1216 12:57:30.095762 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b63cf051b55b7d9af43465545df1004-flexvolume-dir\") pod \"kube-controller-manager-srv-gtzk5.gb1.brightbox.com\" (UID: \"1b63cf051b55b7d9af43465545df1004\") " pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.096513 kubelet[2968]: I1216 12:57:30.095807 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b63cf051b55b7d9af43465545df1004-kubeconfig\") pod \"kube-controller-manager-srv-gtzk5.gb1.brightbox.com\" (UID: \"1b63cf051b55b7d9af43465545df1004\") " pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.096513 kubelet[2968]: I1216 12:57:30.095840 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b63cf051b55b7d9af43465545df1004-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gtzk5.gb1.brightbox.com\" (UID: \"1b63cf051b55b7d9af43465545df1004\") " pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.611173 kubelet[2968]: I1216 12:57:30.611049 2968 apiserver.go:52] "Watching apiserver" Dec 16 12:57:30.689036 kubelet[2968]: I1216 12:57:30.688730 2968 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:57:30.775008 kubelet[2968]: I1216 12:57:30.774759 2968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-gtzk5.gb1.brightbox.com" podStartSLOduration=0.774725723 podStartE2EDuration="774.725723ms" podCreationTimestamp="2025-12-16 12:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:57:30.774296629 +0000 UTC m=+1.356870679" watchObservedRunningTime="2025-12-16 12:57:30.774725723 +0000 UTC m=+1.357299738" Dec 16 12:57:30.776571 kubelet[2968]: I1216 12:57:30.776462 2968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-gtzk5.gb1.brightbox.com" podStartSLOduration=2.776451845 
podStartE2EDuration="2.776451845s" podCreationTimestamp="2025-12-16 12:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:57:30.750399938 +0000 UTC m=+1.332973990" watchObservedRunningTime="2025-12-16 12:57:30.776451845 +0000 UTC m=+1.359025865" Dec 16 12:57:30.776721 kubelet[2968]: I1216 12:57:30.776379 2968 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.808325 kubelet[2968]: I1216 12:57:30.808224 2968 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:57:30.808325 kubelet[2968]: E1216 12:57:30.808332 2968 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-gtzk5.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-gtzk5.gb1.brightbox.com" Dec 16 12:57:30.836443 kubelet[2968]: I1216 12:57:30.836303 2968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-gtzk5.gb1.brightbox.com" podStartSLOduration=0.836282087 podStartE2EDuration="836.282087ms" podCreationTimestamp="2025-12-16 12:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:57:30.813067776 +0000 UTC m=+1.395641797" watchObservedRunningTime="2025-12-16 12:57:30.836282087 +0000 UTC m=+1.418856100" Dec 16 12:57:35.114239 kubelet[2968]: I1216 12:57:35.114100 2968 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:57:35.115918 containerd[1627]: time="2025-12-16T12:57:35.115801479Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:57:35.117180 kubelet[2968]: I1216 12:57:35.116310 2968 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:57:35.717516 systemd[1]: Created slice kubepods-besteffort-pod490f93ea_71d2_4ebc_a6c3_f62a70377883.slice - libcontainer container kubepods-besteffort-pod490f93ea_71d2_4ebc_a6c3_f62a70377883.slice. 
Dec 16 12:57:35.735789 kubelet[2968]: I1216 12:57:35.735724 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/490f93ea-71d2-4ebc-a6c3-f62a70377883-xtables-lock\") pod \"kube-proxy-967xf\" (UID: \"490f93ea-71d2-4ebc-a6c3-f62a70377883\") " pod="kube-system/kube-proxy-967xf" Dec 16 12:57:35.735789 kubelet[2968]: I1216 12:57:35.735782 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sl88\" (UniqueName: \"kubernetes.io/projected/490f93ea-71d2-4ebc-a6c3-f62a70377883-kube-api-access-7sl88\") pod \"kube-proxy-967xf\" (UID: \"490f93ea-71d2-4ebc-a6c3-f62a70377883\") " pod="kube-system/kube-proxy-967xf" Dec 16 12:57:35.736372 kubelet[2968]: I1216 12:57:35.735815 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/490f93ea-71d2-4ebc-a6c3-f62a70377883-kube-proxy\") pod \"kube-proxy-967xf\" (UID: \"490f93ea-71d2-4ebc-a6c3-f62a70377883\") " pod="kube-system/kube-proxy-967xf" Dec 16 12:57:35.736372 kubelet[2968]: I1216 12:57:35.735840 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/490f93ea-71d2-4ebc-a6c3-f62a70377883-lib-modules\") pod \"kube-proxy-967xf\" (UID: \"490f93ea-71d2-4ebc-a6c3-f62a70377883\") " pod="kube-system/kube-proxy-967xf" Dec 16 12:57:35.851274 kubelet[2968]: E1216 12:57:35.851176 2968 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 12:57:35.851664 kubelet[2968]: E1216 12:57:35.851495 2968 projected.go:194] Error preparing data for projected volume kube-api-access-7sl88 for pod kube-system/kube-proxy-967xf: configmap "kube-root-ca.crt" not found Dec 16 12:57:35.852022 kubelet[2968]: E1216 12:57:35.851928 2968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/490f93ea-71d2-4ebc-a6c3-f62a70377883-kube-api-access-7sl88 podName:490f93ea-71d2-4ebc-a6c3-f62a70377883 nodeName:}" failed. No retries permitted until 2025-12-16 12:57:36.351892123 +0000 UTC m=+6.934466130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7sl88" (UniqueName: "kubernetes.io/projected/490f93ea-71d2-4ebc-a6c3-f62a70377883-kube-api-access-7sl88") pod "kube-proxy-967xf" (UID: "490f93ea-71d2-4ebc-a6c3-f62a70377883") : configmap "kube-root-ca.crt" not found Dec 16 12:57:36.285495 systemd[1]: Created slice kubepods-besteffort-pod00ff9b9d_b496_4cef_a4fe_4f2352e88ebc.slice - libcontainer container kubepods-besteffort-pod00ff9b9d_b496_4cef_a4fe_4f2352e88ebc.slice. 
Dec 16 12:57:36.339893 kubelet[2968]: I1216 12:57:36.339716 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wtnc\" (UniqueName: \"kubernetes.io/projected/00ff9b9d-b496-4cef-a4fe-4f2352e88ebc-kube-api-access-9wtnc\") pod \"tigera-operator-7dcd859c48-wspgk\" (UID: \"00ff9b9d-b496-4cef-a4fe-4f2352e88ebc\") " pod="tigera-operator/tigera-operator-7dcd859c48-wspgk" Dec 16 12:57:36.339893 kubelet[2968]: I1216 12:57:36.339799 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00ff9b9d-b496-4cef-a4fe-4f2352e88ebc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-wspgk\" (UID: \"00ff9b9d-b496-4cef-a4fe-4f2352e88ebc\") " pod="tigera-operator/tigera-operator-7dcd859c48-wspgk" Dec 16 12:57:36.591735 containerd[1627]: time="2025-12-16T12:57:36.591361992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wspgk,Uid:00ff9b9d-b496-4cef-a4fe-4f2352e88ebc,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:57:36.630640 containerd[1627]: time="2025-12-16T12:57:36.630425300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-967xf,Uid:490f93ea-71d2-4ebc-a6c3-f62a70377883,Namespace:kube-system,Attempt:0,}" Dec 16 12:57:36.641429 containerd[1627]: time="2025-12-16T12:57:36.640300690Z" level=info msg="connecting to shim 8559cf46ae0403bd48b6e34ae51072a97677c5b5ffdfb9f454d3c9d2201287f2" address="unix:///run/containerd/s/bee17eb8ca5fcef576f9af812f37e3b165c584c08736539a48c8f43ac35b6e6e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:57:36.700143 containerd[1627]: time="2025-12-16T12:57:36.698872745Z" level=info msg="connecting to shim de56d717135331732759ac43ff5dc11aa0a266898ba6b1848a387e282a8c996d" address="unix:///run/containerd/s/ea80237878cc3b958cbc54ff8cfc6fe1ca8ca638133f930c89655ed3a0d498b8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:57:36.704362 systemd[1]: Started cri-containerd-8559cf46ae0403bd48b6e34ae51072a97677c5b5ffdfb9f454d3c9d2201287f2.scope - libcontainer container 8559cf46ae0403bd48b6e34ae51072a97677c5b5ffdfb9f454d3c9d2201287f2. 
Dec 16 12:57:36.737000 audit: BPF prog-id=137 op=LOAD Dec 16 12:57:36.741847 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 16 12:57:36.742050 kernel: audit: type=1334 audit(1765889856.737:440): prog-id=137 op=LOAD Dec 16 12:57:36.742000 audit: BPF prog-id=138 op=LOAD Dec 16 12:57:36.745975 kernel: audit: type=1334 audit(1765889856.742:441): prog-id=138 op=LOAD Dec 16 12:57:36.742000 audit[3042]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3030 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835353963663436616530343033626434386236653334616535313037 Dec 16 12:57:36.754401 kernel: audit: type=1300 audit(1765889856.742:441): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3030 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.754532 kernel: audit: type=1327 audit(1765889856.742:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835353963663436616530343033626434386236653334616535313037 Dec 16 12:57:36.743000 audit: BPF prog-id=138 op=UNLOAD Dec 16 12:57:36.743000 audit[3042]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.761758 kernel: audit: type=1334 audit(1765889856.743:442): prog-id=138 op=UNLOAD Dec 16 12:57:36.761809 kernel: audit: type=1300 audit(1765889856.743:442): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835353963663436616530343033626434386236653334616535313037 Dec 16 12:57:36.743000 audit: BPF prog-id=139 op=LOAD Dec 16 12:57:36.772002 kernel: audit: type=1327 audit(1765889856.743:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835353963663436616530343033626434386236653334616535313037 Dec 16 12:57:36.772105 kernel: audit: type=1334 audit(1765889856.743:443): prog-id=139 op=LOAD Dec 16 12:57:36.743000 audit[3042]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3030 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.775316 systemd[1]: Started cri-containerd-de56d717135331732759ac43ff5dc11aa0a266898ba6b1848a387e282a8c996d.scope - libcontainer container de56d717135331732759ac43ff5dc11aa0a266898ba6b1848a387e282a8c996d. Dec 16 12:57:36.778980 kernel: audit: type=1300 audit(1765889856.743:443): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3030 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835353963663436616530343033626434386236653334616535313037 Dec 16 12:57:36.785980 kernel: audit: type=1327 audit(1765889856.743:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835353963663436616530343033626434386236653334616535313037 Dec 16 12:57:36.743000 audit: BPF prog-id=140 op=LOAD Dec 16 12:57:36.743000 audit[3042]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3030 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835353963663436616530343033626434386236653334616535313037 Dec 16 12:57:36.743000 audit: BPF prog-id=140 op=UNLOAD Dec 16 12:57:36.743000 audit[3042]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835353963663436616530343033626434386236653334616535313037 Dec 16 12:57:36.743000 audit: BPF prog-id=139 op=UNLOAD Dec 16 12:57:36.743000 audit[3042]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835353963663436616530343033626434386236653334616535313037 Dec 16 12:57:36.743000 audit: BPF prog-id=141 op=LOAD Dec 16 12:57:36.743000 audit[3042]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3030 pid=3042 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835353963663436616530343033626434386236653334616535313037 Dec 16 12:57:36.832000 audit: BPF prog-id=142 op=LOAD Dec 16 12:57:36.834000 audit: BPF prog-id=143 op=LOAD Dec 16 12:57:36.834000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3062 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353664373137313335333331373332373539616334336666356463 Dec 16 12:57:36.834000 audit: BPF prog-id=143 op=UNLOAD Dec 16 12:57:36.834000 audit[3082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3062 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353664373137313335333331373332373539616334336666356463 Dec 16 12:57:36.836000 audit: BPF prog-id=144 op=LOAD Dec 16 12:57:36.836000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3062 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353664373137313335333331373332373539616334336666356463 Dec 16 12:57:36.837000 audit: BPF prog-id=145 op=LOAD Dec 16 12:57:36.837000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3062 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353664373137313335333331373332373539616334336666356463 Dec 16 12:57:36.837000 audit: BPF prog-id=145 op=UNLOAD Dec 16 12:57:36.837000 audit[3082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3062 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353664373137313335333331373332373539616334336666356463 Dec 16 12:57:36.838000 audit: BPF prog-id=144 op=UNLOAD Dec 16 12:57:36.838000 audit[3082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3062 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353664373137313335333331373332373539616334336666356463 Dec 16 12:57:36.838000 audit: BPF prog-id=146 op=LOAD Dec 16 12:57:36.838000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3062 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:36.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465353664373137313335333331373332373539616334336666356463 Dec 16 12:57:36.871788 containerd[1627]: time="2025-12-16T12:57:36.870868831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wspgk,Uid:00ff9b9d-b496-4cef-a4fe-4f2352e88ebc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8559cf46ae0403bd48b6e34ae51072a97677c5b5ffdfb9f454d3c9d2201287f2\"" Dec 16 12:57:36.879568 containerd[1627]: time="2025-12-16T12:57:36.879438742Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:57:36.882022 containerd[1627]: time="2025-12-16T12:57:36.881945262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-967xf,Uid:490f93ea-71d2-4ebc-a6c3-f62a70377883,Namespace:kube-system,Attempt:0,} returns sandbox id \"de56d717135331732759ac43ff5dc11aa0a266898ba6b1848a387e282a8c996d\"" Dec 16 12:57:36.903121 containerd[1627]: time="2025-12-16T12:57:36.903041170Z" level=info msg="CreateContainer within sandbox \"de56d717135331732759ac43ff5dc11aa0a266898ba6b1848a387e282a8c996d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:57:36.922892 containerd[1627]: time="2025-12-16T12:57:36.922836982Z" level=info msg="Container 75d156bd337ff54b763a638e72de3aec195d7374a43ad0e04b12c43a5c63576d: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:57:36.936560 containerd[1627]: time="2025-12-16T12:57:36.936398462Z" level=info msg="CreateContainer within sandbox \"de56d717135331732759ac43ff5dc11aa0a266898ba6b1848a387e282a8c996d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"75d156bd337ff54b763a638e72de3aec195d7374a43ad0e04b12c43a5c63576d\"" Dec 16 12:57:36.938559 containerd[1627]: time="2025-12-16T12:57:36.938517351Z" level=info msg="StartContainer for \"75d156bd337ff54b763a638e72de3aec195d7374a43ad0e04b12c43a5c63576d\"" Dec 16 12:57:36.941170 
containerd[1627]: time="2025-12-16T12:57:36.941121294Z" level=info msg="connecting to shim 75d156bd337ff54b763a638e72de3aec195d7374a43ad0e04b12c43a5c63576d" address="unix:///run/containerd/s/ea80237878cc3b958cbc54ff8cfc6fe1ca8ca638133f930c89655ed3a0d498b8" protocol=ttrpc version=3 Dec 16 12:57:36.974323 systemd[1]: Started cri-containerd-75d156bd337ff54b763a638e72de3aec195d7374a43ad0e04b12c43a5c63576d.scope - libcontainer container 75d156bd337ff54b763a638e72de3aec195d7374a43ad0e04b12c43a5c63576d. Dec 16 12:57:37.048000 audit: BPF prog-id=147 op=LOAD Dec 16 12:57:37.048000 audit[3115]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3062 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643135366264333337666635346237363361363338653732646533 Dec 16 12:57:37.048000 audit: BPF prog-id=148 op=LOAD Dec 16 12:57:37.048000 audit[3115]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3062 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643135366264333337666635346237363361363338653732646533 Dec 16 12:57:37.048000 audit: BPF prog-id=148 op=UNLOAD Dec 16 12:57:37.048000 audit[3115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3062 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643135366264333337666635346237363361363338653732646533 Dec 16 12:57:37.048000 audit: BPF prog-id=147 op=UNLOAD Dec 16 12:57:37.048000 audit[3115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3062 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643135366264333337666635346237363361363338653732646533 Dec 16 12:57:37.048000 audit: BPF prog-id=149 op=LOAD Dec 16 12:57:37.048000 audit[3115]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3062 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643135366264333337666635346237363361363338653732646533 Dec 16 12:57:37.098127 containerd[1627]: time="2025-12-16T12:57:37.098071442Z" level=info msg="StartContainer for \"75d156bd337ff54b763a638e72de3aec195d7374a43ad0e04b12c43a5c63576d\" returns successfully" Dec 16 12:57:37.576000 audit[3181]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.576000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcbd7d2060 a2=0 a3=7ffcbd7d204c items=0 ppid=3129 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.576000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:57:37.578000 audit[3183]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.578000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd2cd9e70 a2=0 a3=7ffcd2cd9e5c items=0 ppid=3129 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:57:37.580000 audit[3184]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.580000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff84df4840 a2=0 a3=7fff84df482c items=0 ppid=3129 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:57:37.582000 audit[3186]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.582000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb9243480 a2=0 a3=7ffeb924346c items=0 ppid=3129 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:57:37.584000 audit[3188]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.584000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff40e947c0 a2=0 
a3=7fff40e947ac items=0 ppid=3129 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.584000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:57:37.587000 audit[3189]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.587000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc365fbe0 a2=0 a3=7ffcc365fbcc items=0 ppid=3129 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:57:37.693000 audit[3190]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.693000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc0c711db0 a2=0 a3=7ffc0c711d9c items=0 ppid=3129 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:57:37.699000 audit[3192]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.699000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe230cf540 a2=0 a3=7ffe230cf52c items=0 ppid=3129 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 12:57:37.706000 audit[3195]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.706000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe36d60490 a2=0 a3=7ffe36d6047c items=0 ppid=3129 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 12:57:37.708000 audit[3196]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3196 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.708000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea4dd8870 a2=0 a3=7ffea4dd885c items=0 ppid=3129 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:57:37.712000 audit[3198]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.712000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd78df2f40 a2=0 a3=7ffd78df2f2c items=0 ppid=3129 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.712000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:57:37.714000 audit[3199]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.714000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfeb4f120 a2=0 a3=7ffcfeb4f10c items=0 ppid=3129 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.714000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:57:37.719000 audit[3201]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.719000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff6518ac20 a2=0 a3=7fff6518ac0c items=0 ppid=3129 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:57:37.725000 audit[3204]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.725000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffce21270e0 a2=0 a3=7ffce21270cc items=0 ppid=3129 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.725000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 12:57:37.727000 audit[3205]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.727000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffded7ea5c0 a2=0 a3=7ffded7ea5ac items=0 ppid=3129 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:57:37.731000 audit[3207]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.731000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc5735bff0 a2=0 a3=7ffc5735bfdc items=0 ppid=3129 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.731000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:57:37.734000 audit[3208]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.734000 audit[3208]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf523db00 a2=0 a3=7ffdf523daec items=0 ppid=3129 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.734000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:57:37.738000 audit[3210]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.738000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5a61ae90 a2=0 a3=7ffc5a61ae7c items=0 ppid=3129 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:57:37.745000 audit[3213]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.745000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcdd797a50 a2=0 a3=7ffcdd797a3c items=0 ppid=3129 
pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.745000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:57:37.756000 audit[3216]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.756000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffece2ffb70 a2=0 a3=7ffece2ffb5c items=0 ppid=3129 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:57:37.758000 audit[3217]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.758000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffaaa05880 a2=0 a3=7fffaaa0586c items=0 ppid=3129 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.758000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:57:37.763000 audit[3219]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.763000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc570e0b30 a2=0 a3=7ffc570e0b1c items=0 ppid=3129 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.763000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:57:37.769000 audit[3222]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.769000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe85d30a60 a2=0 a3=7ffe85d30a4c items=0 ppid=3129 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.769000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:57:37.771000 audit[3223]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.771000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed439c800 a2=0 a3=7ffed439c7ec items=0 ppid=3129 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.771000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:57:37.776000 audit[3225]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:37.776000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff4c210830 a2=0 a3=7fff4c21081c items=0 ppid=3129 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.776000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:57:37.829000 audit[3231]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:37.829000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe9daeecc0 a2=0 a3=7ffe9daeecac items=0 ppid=3129 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:37.839000 audit[3231]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:37.839000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe9daeecc0 a2=0 a3=7ffe9daeecac items=0 ppid=3129 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.839000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:37.842000 audit[3236]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.842000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdd81ee270 a2=0 a3=7ffdd81ee25c items=0 ppid=3129 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:57:37.847000 audit[3238]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.847000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd3b0a1520 a2=0 a3=7ffd3b0a150c items=0 ppid=3129 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 12:57:37.855000 audit[3241]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.855000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc788de780 a2=0 a3=7ffc788de76c items=0 ppid=3129 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.855000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 12:57:37.857000 audit[3242]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.857000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9b2287a0 a2=0 a3=7fff9b22878c items=0 ppid=3129 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:57:37.861000 audit[3244]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.861000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd81be8aa0 a2=0 a3=7ffd81be8a8c items=0 ppid=3129 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.861000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:57:37.863000 audit[3245]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3245 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 
12:57:37.863000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8399d150 a2=0 a3=7ffc8399d13c items=0 ppid=3129 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.863000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:57:37.867000 audit[3247]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.867000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc447d1960 a2=0 a3=7ffc447d194c items=0 ppid=3129 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.867000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 12:57:37.874000 audit[3250]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.874000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd5b9fdff0 a2=0 a3=7ffd5b9fdfdc items=0 ppid=3129 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.874000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:57:37.876000 audit[3251]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.876000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0d30d400 a2=0 a3=7ffe0d30d3ec items=0 ppid=3129 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:57:37.880000 audit[3253]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.880000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd5cb05630 a2=0 a3=7ffd5cb0561c items=0 ppid=3129 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.880000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:57:37.882000 audit[3254]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3254 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.882000 audit[3254]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb8725680 a2=0 a3=7ffdb872566c items=0 ppid=3129 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.882000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:57:37.887000 audit[3256]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.887000 audit[3256]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe6b04b3a0 a2=0 a3=7ffe6b04b38c items=0 ppid=3129 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:57:37.893000 audit[3259]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.893000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc6b9563d0 a2=0 a3=7ffc6b9563bc items=0 ppid=3129 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:57:37.899000 audit[3262]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.899000 audit[3262]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff20d00eb0 a2=0 a3=7fff20d00e9c items=0 ppid=3129 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.899000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 12:57:37.901000 audit[3263]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3263 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 12:57:37.901000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff06e996d0 a2=0 a3=7fff06e996bc items=0 ppid=3129 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.901000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:57:37.905000 audit[3265]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.905000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff06521940 a2=0 a3=7fff0652192c items=0 ppid=3129 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:57:37.911000 audit[3268]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.911000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff486400c0 a2=0 a3=7fff486400ac items=0 ppid=3129 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:57:37.913000 audit[3269]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.913000 audit[3269]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb449b250 a2=0 a3=7fffb449b23c items=0 ppid=3129 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:57:37.917000 audit[3271]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.917000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffeca1452b0 a2=0 a3=7ffeca14529c items=0 ppid=3129 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.917000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:57:37.919000 audit[3272]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.919000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff83ffadf0 a2=0 a3=7fff83ffaddc items=0 ppid=3129 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.919000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:57:37.923000 audit[3274]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.923000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcff6a31f0 a2=0 a3=7ffcff6a31dc items=0 ppid=3129 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:57:37.929000 audit[3277]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:37.929000 audit[3277]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd7bae8fa0 a2=0 a3=7ffd7bae8f8c items=0 ppid=3129 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.929000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:57:37.936000 audit[3279]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:57:37.936000 audit[3279]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd3ecf3610 a2=0 a3=7ffd3ecf35fc items=0 ppid=3129 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:37.936000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:37.937000 audit[3279]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:57:37.937000 audit[3279]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd3ecf3610 a2=0 a3=7ffd3ecf35fc items=0 ppid=3129 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:57:37.937000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:39.968164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2716684611.mount: Deactivated successfully. Dec 16 12:57:41.457392 containerd[1627]: time="2025-12-16T12:57:41.457315623Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:41.460422 containerd[1627]: time="2025-12-16T12:57:41.460303316Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Dec 16 12:57:41.461978 containerd[1627]: time="2025-12-16T12:57:41.461822098Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:41.466451 containerd[1627]: time="2025-12-16T12:57:41.466392836Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:41.467945 containerd[1627]: time="2025-12-16T12:57:41.467786758Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.587545996s" Dec 16 12:57:41.467945 containerd[1627]: time="2025-12-16T12:57:41.467835840Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 12:57:41.484906 containerd[1627]: time="2025-12-16T12:57:41.484827847Z" level=info msg="CreateContainer within sandbox \"8559cf46ae0403bd48b6e34ae51072a97677c5b5ffdfb9f454d3c9d2201287f2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:57:41.518007 containerd[1627]: time="2025-12-16T12:57:41.517933004Z" level=info msg="Container bde6ae53eaad51e8880c9544b11f123b2dfe7a3fe41026917262408902bfd978: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:57:41.525005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293401925.mount: Deactivated successfully. 
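The audit PROCTITLE values above are the executing command lines, hex-encoded with NUL bytes separating the arguments (some are truncated by the audit subsystem and end mid-token). A minimal Python sketch for decoding them; `decode_proctitle` is an illustrative helper, not part of any tool in this log, and the sample string is the recurring iptables-restore value from the entries above:

```python
# Minimal sketch: audit PROCTITLE values are hex-encoded command lines with
# NUL-separated arguments. decode_proctitle() is an illustrative helper only.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode(errors="replace") for arg in raw.split(b"\x00") if arg)

# Recurring iptables-restore PROCTITLE value copied from the audit entries above.
sample = (
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)
print(decode_proctitle(sample))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```

Applied to the runc PROCTITLE values earlier in this section, the same helper recovers the `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/...` invocations, with the container ID cut short where the audit record truncates.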
Dec 16 12:57:41.536126 containerd[1627]: time="2025-12-16T12:57:41.536038703Z" level=info msg="CreateContainer within sandbox \"8559cf46ae0403bd48b6e34ae51072a97677c5b5ffdfb9f454d3c9d2201287f2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bde6ae53eaad51e8880c9544b11f123b2dfe7a3fe41026917262408902bfd978\"" Dec 16 12:57:41.540197 containerd[1627]: time="2025-12-16T12:57:41.540057995Z" level=info msg="StartContainer for \"bde6ae53eaad51e8880c9544b11f123b2dfe7a3fe41026917262408902bfd978\"" Dec 16 12:57:41.544595 containerd[1627]: time="2025-12-16T12:57:41.544531818Z" level=info msg="connecting to shim bde6ae53eaad51e8880c9544b11f123b2dfe7a3fe41026917262408902bfd978" address="unix:///run/containerd/s/bee17eb8ca5fcef576f9af812f37e3b165c584c08736539a48c8f43ac35b6e6e" protocol=ttrpc version=3 Dec 16 12:57:41.589387 kubelet[2968]: I1216 12:57:41.588995 2968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-967xf" podStartSLOduration=6.584138353 podStartE2EDuration="6.584138353s" podCreationTimestamp="2025-12-16 12:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:57:37.815610602 +0000 UTC m=+8.398184655" watchObservedRunningTime="2025-12-16 12:57:41.584138353 +0000 UTC m=+12.166712360" Dec 16 12:57:41.596594 systemd[1]: Started cri-containerd-bde6ae53eaad51e8880c9544b11f123b2dfe7a3fe41026917262408902bfd978.scope - libcontainer container bde6ae53eaad51e8880c9544b11f123b2dfe7a3fe41026917262408902bfd978. Dec 16 12:57:41.672000 audit: BPF prog-id=150 op=LOAD Dec 16 12:57:41.672000 audit: BPF prog-id=151 op=LOAD Dec 16 12:57:41.672000 audit[3288]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3030 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:41.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264653661653533656161643531653838383063393534346231316631 Dec 16 12:57:41.673000 audit: BPF prog-id=151 op=UNLOAD Dec 16 12:57:41.673000 audit[3288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:41.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264653661653533656161643531653838383063393534346231316631 Dec 16 12:57:41.673000 audit: BPF prog-id=152 op=LOAD Dec 16 12:57:41.673000 audit[3288]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3030 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:41.673000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264653661653533656161643531653838383063393534346231316631 Dec 16 12:57:41.673000 audit: BPF prog-id=153 op=LOAD Dec 16 12:57:41.673000 audit[3288]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3030 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:41.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264653661653533656161643531653838383063393534346231316631 Dec 16 12:57:41.673000 audit: BPF prog-id=153 op=UNLOAD Dec 16 12:57:41.673000 audit[3288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:41.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264653661653533656161643531653838383063393534346231316631 Dec 16 12:57:41.673000 audit: BPF prog-id=152 op=UNLOAD Dec 16 12:57:41.673000 audit[3288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3030 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:41.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264653661653533656161643531653838383063393534346231316631 Dec 16 12:57:41.673000 audit: BPF prog-id=154 op=LOAD Dec 16 12:57:41.673000 audit[3288]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3030 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:41.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264653661653533656161643531653838383063393534346231316631 Dec 16 12:57:41.704453 containerd[1627]: time="2025-12-16T12:57:41.704363333Z" level=info msg="StartContainer for \"bde6ae53eaad51e8880c9544b11f123b2dfe7a3fe41026917262408902bfd978\" returns successfully" Dec 16 12:57:47.985654 sudo[1963]: pam_unix(sudo:session): session closed for user root Dec 16 12:57:47.997114 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:57:47.997312 kernel: audit: type=1106 audit(1765889867.985:520): pid=1963 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:57:47.985000 audit[1963]: USER_END pid=1963 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:57:47.989000 audit[1963]: CRED_DISP pid=1963 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:57:48.003982 kernel: audit: type=1104 audit(1765889867.989:521): pid=1963 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:57:48.149895 sshd[1962]: Connection closed by 139.178.68.195 port 55652 Dec 16 12:57:48.152856 sshd-session[1943]: pam_unix(sshd:session): session closed for user core Dec 16 12:57:48.157000 audit[1943]: USER_END pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:57:48.164983 kernel: audit: type=1106 audit(1765889868.157:522): pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:57:48.165912 systemd[1]: sshd@8-10.244.27.222:22-139.178.68.195:55652.service: Deactivated successfully. Dec 16 12:57:48.173569 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:57:48.157000 audit[1943]: CRED_DISP pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:57:48.178300 systemd[1]: session-12.scope: Consumed 7.548s CPU time, 152.5M memory peak. Dec 16 12:57:48.180977 kernel: audit: type=1104 audit(1765889868.157:523): pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:57:48.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.27.222:22-139.178.68.195:55652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:48.192682 kernel: audit: type=1131 audit(1765889868.165:524): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.27.222:22-139.178.68.195:55652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:48.192035 systemd-logind[1597]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:57:48.197295 systemd-logind[1597]: Removed session 12. 
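The kubelet pod_startup_latency_tracker entries in this log (kube-proxy-967xf above, tigera-operator-7dcd859c48-wspgk further below) report figures consistent with podStartE2EDuration being observedRunningTime minus podCreationTimestamp, and podStartSLOduration being that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small sketch checking the tigera-operator numbers, with timestamps copied from the log and truncated to microseconds (the precision Python's datetime supports):

```python
from datetime import datetime, timezone

UTC = timezone.utc
FMT = "%Y-%m-%d %H:%M:%S.%f"

# Timestamps copied from the tigera-operator pod_startup_latency_tracker entry below.
created    = datetime(2025, 12, 16, 12, 57, 36, tzinfo=UTC)                       # podCreationTimestamp
running    = datetime.strptime("2025-12-16 12:57:54.953997", FMT).replace(tzinfo=UTC)  # observedRunningTime
pull_start = datetime.strptime("2025-12-16 12:57:36.877985", FMT).replace(tzinfo=UTC)  # firstStartedPulling
pull_end   = datetime.strptime("2025-12-16 12:57:41.469165", FMT).replace(tzinfo=UTC)  # lastFinishedPulling

e2e = (running - created).total_seconds()              # observedRunningTime - creation
slo = e2e - (pull_end - pull_start).total_seconds()    # minus the image-pull window
print(f"E2E ~= {e2e:.6f}s, SLO ~= {slo:.6f}s")
# -> E2E ~= 18.953997s, SLO ~= 14.362817s, matching podStartE2EDuration / podStartSLOduration
```

For kube-proxy-967xf the pull timestamps are the zero time, so the two durations coincide at 6.584138353s, as logged above.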
Dec 16 12:57:48.983000 audit[3366]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:48.991982 kernel: audit: type=1325 audit(1765889868.983:525): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:48.983000 audit[3366]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcb3ec39d0 a2=0 a3=7ffcb3ec39bc items=0 ppid=3129 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:49.001043 kernel: audit: type=1300 audit(1765889868.983:525): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcb3ec39d0 a2=0 a3=7ffcb3ec39bc items=0 ppid=3129 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:48.983000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:49.009979 kernel: audit: type=1327 audit(1765889868.983:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:49.004000 audit[3366]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:49.004000 audit[3366]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb3ec39d0 a2=0 a3=0 items=0 ppid=3129 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:49.017899 kernel: audit: type=1325 audit(1765889869.004:526): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:49.018024 kernel: audit: type=1300 audit(1765889869.004:526): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb3ec39d0 a2=0 a3=0 items=0 ppid=3129 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:49.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:49.041000 audit[3368]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:49.041000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe24934150 a2=0 a3=7ffe2493413c items=0 ppid=3129 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:49.041000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:49.046000 audit[3368]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3368 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:49.046000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe24934150 a2=0 a3=0 items=0 ppid=3129 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:49.046000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:52.814000 audit[3373]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:52.814000 audit[3373]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffce8215580 a2=0 a3=7ffce821556c items=0 ppid=3129 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:52.814000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:52.819000 audit[3373]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:52.819000 audit[3373]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce8215580 a2=0 a3=0 items=0 ppid=3129 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:52.819000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:52.885000 audit[3375]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:52.885000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffee4213ed0 a2=0 a3=7ffee4213ebc items=0 ppid=3129 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:52.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:52.891000 audit[3375]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:52.891000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffee4213ed0 a2=0 a3=0 items=0 ppid=3129 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:52.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:53.918000 audit[3377]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:53.925133 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 
16 12:57:53.925203 kernel: audit: type=1325 audit(1765889873.918:533): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:53.918000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffbea8d260 a2=0 a3=7fffbea8d24c items=0 ppid=3129 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:53.929937 kernel: audit: type=1300 audit(1765889873.918:533): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffbea8d260 a2=0 a3=7fffbea8d24c items=0 ppid=3129 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:53.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:53.944732 kernel: audit: type=1327 audit(1765889873.918:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:53.933000 audit[3377]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:53.951989 kernel: audit: type=1325 audit(1765889873.933:534): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:53.933000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffbea8d260 a2=0 a3=0 items=0 ppid=3129 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:53.958983 kernel: audit: type=1300 audit(1765889873.933:534): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffbea8d260 a2=0 a3=0 items=0 ppid=3129 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:53.933000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:53.964005 kernel: audit: type=1327 audit(1765889873.933:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:54.954203 kubelet[2968]: I1216 12:57:54.954056 2968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-wspgk" podStartSLOduration=14.36281747 podStartE2EDuration="18.953997509s" podCreationTimestamp="2025-12-16 12:57:36 +0000 UTC" firstStartedPulling="2025-12-16 12:57:36.877985459 +0000 UTC m=+7.460559461" lastFinishedPulling="2025-12-16 12:57:41.469165496 +0000 UTC m=+12.051739500" observedRunningTime="2025-12-16 12:57:41.828877234 +0000 UTC m=+12.411451283" watchObservedRunningTime="2025-12-16 12:57:54.953997509 +0000 UTC m=+25.536571529" Dec 16 12:57:54.969846 systemd[1]: Created slice kubepods-besteffort-poda0898314_15f9_4660_907b_20ecb9ae18cc.slice - libcontainer container 
kubepods-besteffort-poda0898314_15f9_4660_907b_20ecb9ae18cc.slice. Dec 16 12:57:54.965000 audit[3379]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:55.008385 kernel: audit: type=1325 audit(1765889874.965:535): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:54.965000 audit[3379]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff69e48e10 a2=0 a3=7fff69e48dfc items=0 ppid=3129 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.015992 kernel: audit: type=1300 audit(1765889874.965:535): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff69e48e10 a2=0 a3=7fff69e48dfc items=0 ppid=3129 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:54.965000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:55.025020 kernel: audit: type=1327 audit(1765889874.965:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:55.028000 audit[3379]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:55.032992 kernel: audit: type=1325 audit(1765889875.028:536): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:55.028000 audit[3379]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff69e48e10 a2=0 a3=0 items=0 ppid=3129 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:55.071525 kubelet[2968]: I1216 12:57:55.071450 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx5sf\" (UniqueName: \"kubernetes.io/projected/a0898314-15f9-4660-907b-20ecb9ae18cc-kube-api-access-xx5sf\") pod \"calico-typha-7cb88554d-2l4qk\" (UID: \"a0898314-15f9-4660-907b-20ecb9ae18cc\") " pod="calico-system/calico-typha-7cb88554d-2l4qk" Dec 16 12:57:55.071525 kubelet[2968]: I1216 12:57:55.071530 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0898314-15f9-4660-907b-20ecb9ae18cc-tigera-ca-bundle\") pod \"calico-typha-7cb88554d-2l4qk\" (UID: \"a0898314-15f9-4660-907b-20ecb9ae18cc\") " pod="calico-system/calico-typha-7cb88554d-2l4qk" Dec 16 12:57:55.071733 kubelet[2968]: I1216 12:57:55.071569 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a0898314-15f9-4660-907b-20ecb9ae18cc-typha-certs\") pod \"calico-typha-7cb88554d-2l4qk\" 
(UID: \"a0898314-15f9-4660-907b-20ecb9ae18cc\") " pod="calico-system/calico-typha-7cb88554d-2l4qk" Dec 16 12:57:55.134925 systemd[1]: Created slice kubepods-besteffort-podf90d7033_1293_4318_a198_1af521fd1be3.slice - libcontainer container kubepods-besteffort-podf90d7033_1293_4318_a198_1af521fd1be3.slice. Dec 16 12:57:55.174999 kubelet[2968]: I1216 12:57:55.174350 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f90d7033-1293-4318-a198-1af521fd1be3-flexvol-driver-host\") pod \"calico-node-hpxl7\" (UID: \"f90d7033-1293-4318-a198-1af521fd1be3\") " pod="calico-system/calico-node-hpxl7" Dec 16 12:57:55.174999 kubelet[2968]: I1216 12:57:55.174417 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f90d7033-1293-4318-a198-1af521fd1be3-node-certs\") pod \"calico-node-hpxl7\" (UID: \"f90d7033-1293-4318-a198-1af521fd1be3\") " pod="calico-system/calico-node-hpxl7" Dec 16 12:57:55.174999 kubelet[2968]: I1216 12:57:55.174474 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f90d7033-1293-4318-a198-1af521fd1be3-policysync\") pod \"calico-node-hpxl7\" (UID: \"f90d7033-1293-4318-a198-1af521fd1be3\") " pod="calico-system/calico-node-hpxl7" Dec 16 12:57:55.174999 kubelet[2968]: I1216 12:57:55.174519 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f90d7033-1293-4318-a198-1af521fd1be3-lib-modules\") pod \"calico-node-hpxl7\" (UID: \"f90d7033-1293-4318-a198-1af521fd1be3\") " pod="calico-system/calico-node-hpxl7" Dec 16 12:57:55.174999 kubelet[2968]: I1216 12:57:55.174549 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f90d7033-1293-4318-a198-1af521fd1be3-cni-bin-dir\") pod \"calico-node-hpxl7\" (UID: \"f90d7033-1293-4318-a198-1af521fd1be3\") " pod="calico-system/calico-node-hpxl7" Dec 16 12:57:55.175380 kubelet[2968]: I1216 12:57:55.174576 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f90d7033-1293-4318-a198-1af521fd1be3-cni-log-dir\") pod \"calico-node-hpxl7\" (UID: \"f90d7033-1293-4318-a198-1af521fd1be3\") " pod="calico-system/calico-node-hpxl7" Dec 16 12:57:55.175380 kubelet[2968]: I1216 12:57:55.174664 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt7jc\" (UniqueName: \"kubernetes.io/projected/f90d7033-1293-4318-a198-1af521fd1be3-kube-api-access-nt7jc\") pod \"calico-node-hpxl7\" (UID: \"f90d7033-1293-4318-a198-1af521fd1be3\") " pod="calico-system/calico-node-hpxl7" Dec 16 12:57:55.175380 kubelet[2968]: I1216 12:57:55.174699 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f90d7033-1293-4318-a198-1af521fd1be3-tigera-ca-bundle\") pod \"calico-node-hpxl7\" (UID: \"f90d7033-1293-4318-a198-1af521fd1be3\") " pod="calico-system/calico-node-hpxl7" Dec 16 12:57:55.175380 kubelet[2968]: I1216 12:57:55.174725 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f90d7033-1293-4318-a198-1af521fd1be3-var-lib-calico\") pod \"calico-node-hpxl7\" (UID: \"f90d7033-1293-4318-a198-1af521fd1be3\") " pod="calico-system/calico-node-hpxl7" Dec 16 12:57:55.175380 kubelet[2968]: I1216 12:57:55.174760 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f90d7033-1293-4318-a198-1af521fd1be3-var-run-calico\") pod \"calico-node-hpxl7\" (UID: \"f90d7033-1293-4318-a198-1af521fd1be3\") " pod="calico-system/calico-node-hpxl7" Dec 16 12:57:55.176175 kubelet[2968]: I1216 12:57:55.174802 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f90d7033-1293-4318-a198-1af521fd1be3-xtables-lock\") pod \"calico-node-hpxl7\" (UID: \"f90d7033-1293-4318-a198-1af521fd1be3\") " pod="calico-system/calico-node-hpxl7" Dec 16 12:57:55.176175 kubelet[2968]: I1216 12:57:55.174844 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f90d7033-1293-4318-a198-1af521fd1be3-cni-net-dir\") pod \"calico-node-hpxl7\" (UID: \"f90d7033-1293-4318-a198-1af521fd1be3\") " pod="calico-system/calico-node-hpxl7" Dec 16 12:57:55.246415 kubelet[2968]: E1216 12:57:55.245781 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:57:55.276301 kubelet[2968]: I1216 12:57:55.276244 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ae2d98cb-e462-4622-a2ef-d1063c3df86a-varrun\") pod \"csi-node-driver-frm2n\" (UID: \"ae2d98cb-e462-4622-a2ef-d1063c3df86a\") " pod="calico-system/csi-node-driver-frm2n" Dec 16 12:57:55.276497 kubelet[2968]: I1216 12:57:55.276374 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ae2d98cb-e462-4622-a2ef-d1063c3df86a-socket-dir\") pod \"csi-node-driver-frm2n\" (UID: \"ae2d98cb-e462-4622-a2ef-d1063c3df86a\") " pod="calico-system/csi-node-driver-frm2n" Dec 16 12:57:55.276497 kubelet[2968]: I1216 12:57:55.276424 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae2d98cb-e462-4622-a2ef-d1063c3df86a-kubelet-dir\") pod \"csi-node-driver-frm2n\" (UID: \"ae2d98cb-e462-4622-a2ef-d1063c3df86a\") " pod="calico-system/csi-node-driver-frm2n" Dec 16 12:57:55.276497 kubelet[2968]: I1216 12:57:55.276479 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae2d98cb-e462-4622-a2ef-d1063c3df86a-registration-dir\") pod \"csi-node-driver-frm2n\" (UID: \"ae2d98cb-e462-4622-a2ef-d1063c3df86a\") " pod="calico-system/csi-node-driver-frm2n" Dec 16 12:57:55.276666 kubelet[2968]: I1216 12:57:55.276511 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jrd7\" (UniqueName: 
\"kubernetes.io/projected/ae2d98cb-e462-4622-a2ef-d1063c3df86a-kube-api-access-6jrd7\") pod \"csi-node-driver-frm2n\" (UID: \"ae2d98cb-e462-4622-a2ef-d1063c3df86a\") " pod="calico-system/csi-node-driver-frm2n" Dec 16 12:57:55.283354 kubelet[2968]: E1216 12:57:55.282917 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.284831 kubelet[2968]: W1216 12:57:55.283905 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.284903 containerd[1627]: time="2025-12-16T12:57:55.284202101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cb88554d-2l4qk,Uid:a0898314-15f9-4660-907b-20ecb9ae18cc,Namespace:calico-system,Attempt:0,}" Dec 16 12:57:55.286324 kubelet[2968]: E1216 12:57:55.285536 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.286666 kubelet[2968]: E1216 12:57:55.286527 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.286666 kubelet[2968]: W1216 12:57:55.286548 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.286666 kubelet[2968]: E1216 12:57:55.286565 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.288982 kubelet[2968]: E1216 12:57:55.288054 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.288982 kubelet[2968]: W1216 12:57:55.288074 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.288982 kubelet[2968]: E1216 12:57:55.288102 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.289595 kubelet[2968]: E1216 12:57:55.289531 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.289595 kubelet[2968]: W1216 12:57:55.289555 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.289595 kubelet[2968]: E1216 12:57:55.289572 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:55.291837 kubelet[2968]: E1216 12:57:55.291073 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.291837 kubelet[2968]: W1216 12:57:55.291785 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.291837 kubelet[2968]: E1216 12:57:55.291813 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.292856 kubelet[2968]: E1216 12:57:55.292794 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.292856 kubelet[2968]: W1216 12:57:55.292815 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.292856 kubelet[2968]: E1216 12:57:55.292833 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.294875 kubelet[2968]: E1216 12:57:55.294816 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.294875 kubelet[2968]: W1216 12:57:55.294836 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.294875 kubelet[2968]: E1216 12:57:55.294852 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.297410 kubelet[2968]: E1216 12:57:55.296763 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.297410 kubelet[2968]: W1216 12:57:55.296784 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.297410 kubelet[2968]: E1216 12:57:55.296804 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.299145 kubelet[2968]: E1216 12:57:55.299122 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.299361 kubelet[2968]: W1216 12:57:55.299250 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.299930 kubelet[2968]: E1216 12:57:55.299796 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:55.300630 kubelet[2968]: E1216 12:57:55.300528 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.300630 kubelet[2968]: W1216 12:57:55.300548 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.300630 kubelet[2968]: E1216 12:57:55.300564 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.301942 kubelet[2968]: E1216 12:57:55.301755 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.301942 kubelet[2968]: W1216 12:57:55.301774 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.301942 kubelet[2968]: E1216 12:57:55.301791 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.305289 kubelet[2968]: E1216 12:57:55.304828 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.305289 kubelet[2968]: W1216 12:57:55.304967 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.305289 kubelet[2968]: E1216 12:57:55.304992 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.306231 kubelet[2968]: E1216 12:57:55.305800 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.306231 kubelet[2968]: W1216 12:57:55.305821 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.306231 kubelet[2968]: E1216 12:57:55.305837 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.308007 kubelet[2968]: E1216 12:57:55.307786 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.308007 kubelet[2968]: W1216 12:57:55.307807 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.309010 kubelet[2968]: E1216 12:57:55.307823 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:55.314062 kubelet[2968]: E1216 12:57:55.314038 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.314331 kubelet[2968]: W1216 12:57:55.314209 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.314331 kubelet[2968]: E1216 12:57:55.314240 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.315455 kubelet[2968]: E1216 12:57:55.315257 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.315966 kubelet[2968]: W1216 12:57:55.315672 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.315966 kubelet[2968]: E1216 12:57:55.315697 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.317072 kubelet[2968]: E1216 12:57:55.317051 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.317306 kubelet[2968]: W1216 12:57:55.317219 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.317630 kubelet[2968]: E1216 12:57:55.317247 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.318291 kubelet[2968]: E1216 12:57:55.318136 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.318291 kubelet[2968]: W1216 12:57:55.318171 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.318291 kubelet[2968]: E1216 12:57:55.318189 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.319389 kubelet[2968]: E1216 12:57:55.319297 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.319389 kubelet[2968]: W1216 12:57:55.319316 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.319389 kubelet[2968]: E1216 12:57:55.319346 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:55.320639 kubelet[2968]: E1216 12:57:55.320386 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.320639 kubelet[2968]: W1216 12:57:55.320485 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.320639 kubelet[2968]: E1216 12:57:55.320511 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.322171 kubelet[2968]: E1216 12:57:55.322046 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.322171 kubelet[2968]: W1216 12:57:55.322067 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.322171 kubelet[2968]: E1216 12:57:55.322085 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.324375 kubelet[2968]: E1216 12:57:55.324262 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.324375 kubelet[2968]: W1216 12:57:55.324283 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.324375 kubelet[2968]: E1216 12:57:55.324300 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.328756 kubelet[2968]: E1216 12:57:55.328240 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.328897 kubelet[2968]: W1216 12:57:55.328873 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.329057 kubelet[2968]: E1216 12:57:55.329034 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.331310 kubelet[2968]: E1216 12:57:55.331249 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.331310 kubelet[2968]: W1216 12:57:55.331269 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.331310 kubelet[2968]: E1216 12:57:55.331286 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:55.331788 kubelet[2968]: E1216 12:57:55.331732 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.331788 kubelet[2968]: W1216 12:57:55.331750 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.331788 kubelet[2968]: E1216 12:57:55.331766 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.332456 kubelet[2968]: E1216 12:57:55.332382 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.332456 kubelet[2968]: W1216 12:57:55.332413 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.332456 kubelet[2968]: E1216 12:57:55.332432 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.335063 kubelet[2968]: E1216 12:57:55.334625 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.335063 kubelet[2968]: W1216 12:57:55.334648 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.335063 kubelet[2968]: E1216 12:57:55.334666 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.338002 kubelet[2968]: E1216 12:57:55.336020 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.338002 kubelet[2968]: W1216 12:57:55.336049 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.338002 kubelet[2968]: E1216 12:57:55.336070 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.338002 kubelet[2968]: E1216 12:57:55.336436 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.338002 kubelet[2968]: W1216 12:57:55.336450 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.338002 kubelet[2968]: E1216 12:57:55.336489 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:55.338002 kubelet[2968]: E1216 12:57:55.337170 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.338002 kubelet[2968]: W1216 12:57:55.337185 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.338002 kubelet[2968]: E1216 12:57:55.337400 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.338545 kubelet[2968]: E1216 12:57:55.338198 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.338545 kubelet[2968]: W1216 12:57:55.338215 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.338545 kubelet[2968]: E1216 12:57:55.338232 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.345173 kubelet[2968]: E1216 12:57:55.345074 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.345984 kubelet[2968]: W1216 12:57:55.345339 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.345984 kubelet[2968]: E1216 12:57:55.345372 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.346297 kubelet[2968]: E1216 12:57:55.346275 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.346463 kubelet[2968]: W1216 12:57:55.346391 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.348239 kubelet[2968]: E1216 12:57:55.347723 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.353300 kubelet[2968]: E1216 12:57:55.352997 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.353300 kubelet[2968]: W1216 12:57:55.353030 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.355007 kubelet[2968]: E1216 12:57:55.353055 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:55.394390 kubelet[2968]: E1216 12:57:55.394275 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.396898 containerd[1627]: time="2025-12-16T12:57:55.395906418Z" level=info msg="connecting to shim 9c51361114eb1bfda959f1effa39821eaab0b56dca3e7da3df2891a4b224e08b" address="unix:///run/containerd/s/abbc56e12beede8e4302a69a657bbe31a41c732268c65845baf9c9d116773d5a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:57:55.397038 kubelet[2968]: W1216 12:57:55.396016 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.397038 kubelet[2968]: E1216 12:57:55.396067 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.400867 kubelet[2968]: E1216 12:57:55.400805 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.400867 kubelet[2968]: W1216 12:57:55.400829 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.400867 kubelet[2968]: E1216 12:57:55.400848 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.402449 kubelet[2968]: E1216 12:57:55.401495 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.402449 kubelet[2968]: W1216 12:57:55.401510 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.402449 kubelet[2968]: E1216 12:57:55.401526 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.402449 kubelet[2968]: E1216 12:57:55.402186 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.402449 kubelet[2968]: W1216 12:57:55.402258 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.402449 kubelet[2968]: E1216 12:57:55.402324 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:55.404511 kubelet[2968]: E1216 12:57:55.402840 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.404511 kubelet[2968]: W1216 12:57:55.402863 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.404511 kubelet[2968]: E1216 12:57:55.402880 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.404511 kubelet[2968]: E1216 12:57:55.404029 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.404511 kubelet[2968]: W1216 12:57:55.404065 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.404511 kubelet[2968]: E1216 12:57:55.404084 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.404511 kubelet[2968]: E1216 12:57:55.404424 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.404511 kubelet[2968]: W1216 12:57:55.404458 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.404511 kubelet[2968]: E1216 12:57:55.404475 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.405943 kubelet[2968]: E1216 12:57:55.405126 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.405943 kubelet[2968]: W1216 12:57:55.405166 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.405943 kubelet[2968]: E1216 12:57:55.405184 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.406462 kubelet[2968]: E1216 12:57:55.405851 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.406462 kubelet[2968]: W1216 12:57:55.406004 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.406462 kubelet[2968]: E1216 12:57:55.406025 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:55.407265 kubelet[2968]: E1216 12:57:55.407018 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.407265 kubelet[2968]: W1216 12:57:55.407032 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.407265 kubelet[2968]: E1216 12:57:55.407078 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.407942 kubelet[2968]: E1216 12:57:55.407496 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.407942 kubelet[2968]: W1216 12:57:55.407511 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.407942 kubelet[2968]: E1216 12:57:55.407527 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.408564 kubelet[2968]: E1216 12:57:55.408042 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.408564 kubelet[2968]: W1216 12:57:55.408076 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.408564 kubelet[2968]: E1216 12:57:55.408109 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.408564 kubelet[2968]: E1216 12:57:55.408374 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.408564 kubelet[2968]: W1216 12:57:55.408407 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.408564 kubelet[2968]: E1216 12:57:55.408423 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.409975 kubelet[2968]: E1216 12:57:55.408818 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.409975 kubelet[2968]: W1216 12:57:55.408839 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.409975 kubelet[2968]: E1216 12:57:55.408857 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:55.409975 kubelet[2968]: E1216 12:57:55.409177 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.409975 kubelet[2968]: W1216 12:57:55.409190 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.409975 kubelet[2968]: E1216 12:57:55.409229 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.409975 kubelet[2968]: E1216 12:57:55.409481 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.409975 kubelet[2968]: W1216 12:57:55.409494 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.409975 kubelet[2968]: E1216 12:57:55.409529 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.409975 kubelet[2968]: E1216 12:57:55.409842 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.411981 kubelet[2968]: W1216 12:57:55.409855 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.411981 kubelet[2968]: E1216 12:57:55.409898 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.411981 kubelet[2968]: E1216 12:57:55.410215 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.411981 kubelet[2968]: W1216 12:57:55.410229 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.411981 kubelet[2968]: E1216 12:57:55.410267 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.411981 kubelet[2968]: E1216 12:57:55.410656 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.411981 kubelet[2968]: W1216 12:57:55.410733 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.411981 kubelet[2968]: E1216 12:57:55.410758 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:55.411981 kubelet[2968]: E1216 12:57:55.411288 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.411981 kubelet[2968]: W1216 12:57:55.411302 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.412390 kubelet[2968]: E1216 12:57:55.411340 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.414004 kubelet[2968]: E1216 12:57:55.413159 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.414004 kubelet[2968]: W1216 12:57:55.413199 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.414004 kubelet[2968]: E1216 12:57:55.413218 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.414004 kubelet[2968]: E1216 12:57:55.413497 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.414004 kubelet[2968]: W1216 12:57:55.413532 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.414004 kubelet[2968]: E1216 12:57:55.413549 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.415256 kubelet[2968]: E1216 12:57:55.414035 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.415256 kubelet[2968]: W1216 12:57:55.414049 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.415256 kubelet[2968]: E1216 12:57:55.414064 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.416973 kubelet[2968]: E1216 12:57:55.416877 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.416973 kubelet[2968]: W1216 12:57:55.416898 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.416973 kubelet[2968]: E1216 12:57:55.416917 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:55.419718 kubelet[2968]: E1216 12:57:55.418312 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.419718 kubelet[2968]: W1216 12:57:55.418333 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.419718 kubelet[2968]: E1216 12:57:55.418350 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.420678 kubelet[2968]: E1216 12:57:55.420656 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.421161 kubelet[2968]: W1216 12:57:55.421075 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.425029 kubelet[2968]: E1216 12:57:55.425002 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.437293 kubelet[2968]: E1216 12:57:55.437260 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:55.437512 kubelet[2968]: W1216 12:57:55.437478 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:55.437656 kubelet[2968]: E1216 12:57:55.437604 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:55.445082 containerd[1627]: time="2025-12-16T12:57:55.445030346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hpxl7,Uid:f90d7033-1293-4318-a198-1af521fd1be3,Namespace:calico-system,Attempt:0,}" Dec 16 12:57:55.483268 systemd[1]: Started cri-containerd-9c51361114eb1bfda959f1effa39821eaab0b56dca3e7da3df2891a4b224e08b.scope - libcontainer container 9c51361114eb1bfda959f1effa39821eaab0b56dca3e7da3df2891a4b224e08b. 
Dec 16 12:57:55.495177 containerd[1627]: time="2025-12-16T12:57:55.495125685Z" level=info msg="connecting to shim b6a415226c248899a06730d4becfbc65bd173778f3d9e6485e427240d9892b84" address="unix:///run/containerd/s/7c003c15831c55dc353407db50724fa49426a1ebaec6458ca788df9722daa9b3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:57:55.513000 audit: BPF prog-id=155 op=LOAD Dec 16 12:57:55.516000 audit: BPF prog-id=156 op=LOAD Dec 16 12:57:55.516000 audit[3472]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3434 pid=3472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963353133363131313465623162666461393539663165666661333938 Dec 16 12:57:55.516000 audit: BPF prog-id=156 op=UNLOAD Dec 16 12:57:55.516000 audit[3472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3434 pid=3472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963353133363131313465623162666461393539663165666661333938 Dec 16 12:57:55.519000 audit: BPF prog-id=157 op=LOAD Dec 16 12:57:55.519000 audit[3472]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3434 pid=3472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963353133363131313465623162666461393539663165666661333938 Dec 16 12:57:55.521000 audit: BPF prog-id=158 op=LOAD Dec 16 12:57:55.521000 audit[3472]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3434 pid=3472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963353133363131313465623162666461393539663165666661333938 Dec 16 12:57:55.522000 audit: BPF prog-id=158 op=UNLOAD Dec 16 12:57:55.522000 audit[3472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3434 pid=3472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.522000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963353133363131313465623162666461393539663165666661333938 Dec 16 12:57:55.522000 audit: BPF prog-id=157 op=UNLOAD Dec 16 12:57:55.522000 audit[3472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3434 pid=3472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963353133363131313465623162666461393539663165666661333938 Dec 16 12:57:55.522000 audit: BPF prog-id=159 op=LOAD Dec 16 12:57:55.522000 audit[3472]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3434 pid=3472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963353133363131313465623162666461393539663165666661333938 Dec 16 12:57:55.556350 systemd[1]: Started cri-containerd-b6a415226c248899a06730d4becfbc65bd173778f3d9e6485e427240d9892b84.scope - libcontainer container b6a415226c248899a06730d4becfbc65bd173778f3d9e6485e427240d9892b84. 
Dec 16 12:57:55.600000 audit: BPF prog-id=160 op=LOAD Dec 16 12:57:55.602000 audit: BPF prog-id=161 op=LOAD Dec 16 12:57:55.602000 audit[3513]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3502 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613431353232366332343838393961303637333064346265636662 Dec 16 12:57:55.603000 audit: BPF prog-id=161 op=UNLOAD Dec 16 12:57:55.603000 audit[3513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3502 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613431353232366332343838393961303637333064346265636662 Dec 16 12:57:55.604000 audit: BPF prog-id=162 op=LOAD Dec 16 12:57:55.604000 audit[3513]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3502 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613431353232366332343838393961303637333064346265636662 Dec 16 12:57:55.604000 audit: BPF prog-id=163 op=LOAD Dec 16 12:57:55.604000 audit[3513]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3502 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613431353232366332343838393961303637333064346265636662 Dec 16 12:57:55.605000 audit: BPF prog-id=163 op=UNLOAD Dec 16 12:57:55.605000 audit[3513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3502 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613431353232366332343838393961303637333064346265636662 Dec 16 12:57:55.605000 audit: BPF prog-id=162 op=UNLOAD Dec 16 12:57:55.605000 audit[3513]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3502 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613431353232366332343838393961303637333064346265636662 Dec 16 12:57:55.605000 audit: BPF prog-id=164 op=LOAD Dec 16 12:57:55.605000 audit[3513]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3502 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:55.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613431353232366332343838393961303637333064346265636662 Dec 16 12:57:55.611818 containerd[1627]: time="2025-12-16T12:57:55.611688108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cb88554d-2l4qk,Uid:a0898314-15f9-4660-907b-20ecb9ae18cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c51361114eb1bfda959f1effa39821eaab0b56dca3e7da3df2891a4b224e08b\"" Dec 16 12:57:55.617408 containerd[1627]: time="2025-12-16T12:57:55.617218757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:57:55.641929 containerd[1627]: time="2025-12-16T12:57:55.641636833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hpxl7,Uid:f90d7033-1293-4318-a198-1af521fd1be3,Namespace:calico-system,Attempt:0,} returns sandbox id \"b6a415226c248899a06730d4becfbc65bd173778f3d9e6485e427240d9892b84\"" Dec 16 12:57:56.047000 audit[3545]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3545 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:56.047000 audit[3545]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffea3ae4d50 a2=0 a3=7ffea3ae4d3c items=0 ppid=3129 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.047000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:56.054000 audit[3545]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3545 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:56.054000 audit[3545]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea3ae4d50 a2=0 a3=0 items=0 ppid=3129 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:56.729988 kubelet[2968]: E1216 12:57:56.729620 2968 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:57:57.362443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount195566175.mount: Deactivated successfully. Dec 16 12:57:58.729605 kubelet[2968]: E1216 12:57:58.729403 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:57:59.392002 containerd[1627]: time="2025-12-16T12:57:59.391613636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:59.394932 containerd[1627]: time="2025-12-16T12:57:59.394735831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 12:57:59.396464 containerd[1627]: time="2025-12-16T12:57:59.396375424Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:59.402856 containerd[1627]: time="2025-12-16T12:57:59.402650701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:57:59.404697 containerd[1627]: time="2025-12-16T12:57:59.404218931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.786936066s" Dec 16 12:57:59.404697 containerd[1627]: time="2025-12-16T12:57:59.404292498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 12:57:59.407208 containerd[1627]: time="2025-12-16T12:57:59.407164010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:57:59.433109 containerd[1627]: time="2025-12-16T12:57:59.432939914Z" level=info msg="CreateContainer within sandbox \"9c51361114eb1bfda959f1effa39821eaab0b56dca3e7da3df2891a4b224e08b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:57:59.448994 containerd[1627]: time="2025-12-16T12:57:59.447135135Z" level=info msg="Container bbe35cb906b4d7344d41100b3de170c5154806a8a48c2260fdb1ccb8c6c5fd3e: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:57:59.480597 containerd[1627]: time="2025-12-16T12:57:59.480498597Z" level=info msg="CreateContainer within sandbox \"9c51361114eb1bfda959f1effa39821eaab0b56dca3e7da3df2891a4b224e08b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bbe35cb906b4d7344d41100b3de170c5154806a8a48c2260fdb1ccb8c6c5fd3e\"" Dec 16 12:57:59.482670 containerd[1627]: time="2025-12-16T12:57:59.482455086Z" level=info msg="StartContainer for 
\"bbe35cb906b4d7344d41100b3de170c5154806a8a48c2260fdb1ccb8c6c5fd3e\"" Dec 16 12:57:59.484390 containerd[1627]: time="2025-12-16T12:57:59.484357338Z" level=info msg="connecting to shim bbe35cb906b4d7344d41100b3de170c5154806a8a48c2260fdb1ccb8c6c5fd3e" address="unix:///run/containerd/s/abbc56e12beede8e4302a69a657bbe31a41c732268c65845baf9c9d116773d5a" protocol=ttrpc version=3 Dec 16 12:57:59.550206 systemd[1]: Started cri-containerd-bbe35cb906b4d7344d41100b3de170c5154806a8a48c2260fdb1ccb8c6c5fd3e.scope - libcontainer container bbe35cb906b4d7344d41100b3de170c5154806a8a48c2260fdb1ccb8c6c5fd3e. Dec 16 12:57:59.592404 kernel: kauditd_printk_skb: 52 callbacks suppressed Dec 16 12:57:59.592630 kernel: audit: type=1334 audit(1765889879.587:555): prog-id=165 op=LOAD Dec 16 12:57:59.587000 audit: BPF prog-id=165 op=LOAD Dec 16 12:57:59.592000 audit: BPF prog-id=166 op=LOAD Dec 16 12:57:59.592000 audit[3556]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3434 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:59.596977 kernel: audit: type=1334 audit(1765889879.592:556): prog-id=166 op=LOAD Dec 16 12:57:59.597071 kernel: audit: type=1300 audit(1765889879.592:556): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3434 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:59.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653335636239303662346437333434643431313030623364653137 Dec 16 12:57:59.602393 kernel: audit: type=1327 audit(1765889879.592:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653335636239303662346437333434643431313030623364653137 Dec 16 12:57:59.592000 audit: BPF prog-id=166 op=UNLOAD Dec 16 12:57:59.606445 kernel: audit: type=1334 audit(1765889879.592:557): prog-id=166 op=UNLOAD Dec 16 12:57:59.592000 audit[3556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3434 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:59.609120 kernel: audit: type=1300 audit(1765889879.592:557): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3434 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:59.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653335636239303662346437333434643431313030623364653137 Dec 16 12:57:59.614325 kernel: audit: type=1327 audit(1765889879.592:557): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653335636239303662346437333434643431313030623364653137 Dec 16 12:57:59.593000 audit: BPF prog-id=167 op=LOAD Dec 16 12:57:59.593000 audit[3556]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3434 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:59.621369 kernel: audit: type=1334 audit(1765889879.593:558): prog-id=167 op=LOAD Dec 16 12:57:59.621464 kernel: audit: type=1300 audit(1765889879.593:558): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3434 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:59.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653335636239303662346437333434643431313030623364653137 Dec 16 12:57:59.626604 kernel: audit: type=1327 audit(1765889879.593:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653335636239303662346437333434643431313030623364653137 Dec 16 12:57:59.593000 audit: BPF prog-id=168 op=LOAD Dec 16 12:57:59.593000 audit[3556]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3434 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:59.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653335636239303662346437333434643431313030623364653137 Dec 16 12:57:59.593000 audit: BPF prog-id=168 op=UNLOAD Dec 16 12:57:59.593000 audit[3556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3434 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:59.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653335636239303662346437333434643431313030623364653137 Dec 16 12:57:59.593000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:57:59.593000 audit[3556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3434 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:59.593000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653335636239303662346437333434643431313030623364653137 Dec 16 12:57:59.593000 audit: BPF prog-id=169 op=LOAD Dec 16 12:57:59.593000 audit[3556]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3434 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:59.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653335636239303662346437333434643431313030623364653137 Dec 16 12:57:59.676601 containerd[1627]: time="2025-12-16T12:57:59.675753514Z" level=info msg="StartContainer for \"bbe35cb906b4d7344d41100b3de170c5154806a8a48c2260fdb1ccb8c6c5fd3e\" returns successfully" Dec 16 12:57:59.986099 kubelet[2968]: E1216 12:57:59.985730 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:59.986099 kubelet[2968]: W1216 12:57:59.985766 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:59.986099 kubelet[2968]: E1216 12:57:59.985807 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:59.986816 kubelet[2968]: E1216 12:57:59.986129 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:59.986816 kubelet[2968]: W1216 12:57:59.986145 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:59.986816 kubelet[2968]: E1216 12:57:59.986162 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:59.988077 kubelet[2968]: E1216 12:57:59.987132 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:59.988077 kubelet[2968]: W1216 12:57:59.987221 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:59.988077 kubelet[2968]: E1216 12:57:59.987239 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:57:59.997380 kubelet[2968]: E1216 12:57:59.997324 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:59.998329 kubelet[2968]: W1216 12:57:59.997985 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:59.998329 kubelet[2968]: E1216 12:57:59.998037 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:57:59.998804 kubelet[2968]: E1216 12:57:59.998754 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:57:59.998804 kubelet[2968]: W1216 12:57:59.998778 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:57:59.998994 kubelet[2968]: E1216 12:57:59.998971 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.000501 kubelet[2968]: E1216 12:58:00.000455 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.000788 kubelet[2968]: W1216 12:58:00.000717 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.000788 kubelet[2968]: E1216 12:58:00.000744 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.001534 kubelet[2968]: E1216 12:58:00.001491 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.001534 kubelet[2968]: W1216 12:58:00.001515 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.001534 kubelet[2968]: E1216 12:58:00.001533 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.002635 kubelet[2968]: E1216 12:58:00.002323 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.002635 kubelet[2968]: W1216 12:58:00.002339 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.002635 kubelet[2968]: E1216 12:58:00.002358 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:00.003311 kubelet[2968]: E1216 12:58:00.003284 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.003311 kubelet[2968]: W1216 12:58:00.003306 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.003556 kubelet[2968]: E1216 12:58:00.003323 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.004293 kubelet[2968]: E1216 12:58:00.004186 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.004293 kubelet[2968]: W1216 12:58:00.004228 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.004293 kubelet[2968]: E1216 12:58:00.004247 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.005928 kubelet[2968]: E1216 12:58:00.004982 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.006232 kubelet[2968]: W1216 12:58:00.005019 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.006232 kubelet[2968]: E1216 12:58:00.006168 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.007411 kubelet[2968]: E1216 12:58:00.006944 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.007411 kubelet[2968]: W1216 12:58:00.007172 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.007411 kubelet[2968]: E1216 12:58:00.007193 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.008530 kubelet[2968]: E1216 12:58:00.008436 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.009082 kubelet[2968]: W1216 12:58:00.008456 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.009082 kubelet[2968]: E1216 12:58:00.008925 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:00.010411 kubelet[2968]: E1216 12:58:00.010365 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.010411 kubelet[2968]: W1216 12:58:00.010386 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.011207 kubelet[2968]: E1216 12:58:00.010575 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.013148 kubelet[2968]: E1216 12:58:00.011781 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.013148 kubelet[2968]: W1216 12:58:00.011802 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.013148 kubelet[2968]: E1216 12:58:00.011822 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.014055 kubelet[2968]: E1216 12:58:00.014020 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.016483 kubelet[2968]: W1216 12:58:00.016043 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.016483 kubelet[2968]: E1216 12:58:00.016077 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.016483 kubelet[2968]: E1216 12:58:00.016384 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.016483 kubelet[2968]: W1216 12:58:00.016398 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.016483 kubelet[2968]: E1216 12:58:00.016414 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.017361 kubelet[2968]: E1216 12:58:00.017019 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.017361 kubelet[2968]: W1216 12:58:00.017118 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.017361 kubelet[2968]: E1216 12:58:00.017138 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:00.017889 kubelet[2968]: E1216 12:58:00.017842 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.017889 kubelet[2968]: W1216 12:58:00.017863 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.017889 kubelet[2968]: E1216 12:58:00.017891 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.018866 kubelet[2968]: E1216 12:58:00.018210 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.018866 kubelet[2968]: W1216 12:58:00.018244 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.018866 kubelet[2968]: E1216 12:58:00.018263 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.018866 kubelet[2968]: E1216 12:58:00.018607 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.018866 kubelet[2968]: W1216 12:58:00.018621 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.018866 kubelet[2968]: E1216 12:58:00.018637 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.019198 kubelet[2968]: E1216 12:58:00.019129 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.019198 kubelet[2968]: W1216 12:58:00.019146 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.019198 kubelet[2968]: E1216 12:58:00.019162 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.020885 kubelet[2968]: E1216 12:58:00.019946 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.020885 kubelet[2968]: W1216 12:58:00.019989 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.020885 kubelet[2968]: E1216 12:58:00.020007 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:00.023099 kubelet[2968]: E1216 12:58:00.023068 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.023099 kubelet[2968]: W1216 12:58:00.023092 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.023270 kubelet[2968]: E1216 12:58:00.023110 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.023400 kubelet[2968]: E1216 12:58:00.023374 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.023400 kubelet[2968]: W1216 12:58:00.023395 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.023526 kubelet[2968]: E1216 12:58:00.023412 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.027049 kubelet[2968]: E1216 12:58:00.026994 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.027049 kubelet[2968]: W1216 12:58:00.027038 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.028188 kubelet[2968]: E1216 12:58:00.027065 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.028188 kubelet[2968]: E1216 12:58:00.027421 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.028188 kubelet[2968]: W1216 12:58:00.027436 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.028188 kubelet[2968]: E1216 12:58:00.027452 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.028188 kubelet[2968]: E1216 12:58:00.028171 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.028188 kubelet[2968]: W1216 12:58:00.028187 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.030021 kubelet[2968]: E1216 12:58:00.028203 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:00.030021 kubelet[2968]: E1216 12:58:00.029016 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.030021 kubelet[2968]: W1216 12:58:00.029049 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.030021 kubelet[2968]: E1216 12:58:00.029067 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.032171 kubelet[2968]: E1216 12:58:00.030195 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.032171 kubelet[2968]: W1216 12:58:00.030211 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.032171 kubelet[2968]: E1216 12:58:00.030227 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.032171 kubelet[2968]: E1216 12:58:00.032141 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.032171 kubelet[2968]: W1216 12:58:00.032158 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.032849 kubelet[2968]: E1216 12:58:00.032175 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.037003 kubelet[2968]: E1216 12:58:00.033148 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.037003 kubelet[2968]: W1216 12:58:00.033172 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.037003 kubelet[2968]: E1216 12:58:00.033190 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:00.037003 kubelet[2968]: E1216 12:58:00.034298 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:00.037003 kubelet[2968]: W1216 12:58:00.034313 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:00.037003 kubelet[2968]: E1216 12:58:00.034329 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:00.729413 kubelet[2968]: E1216 12:58:00.729224 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:58:00.922066 kubelet[2968]: I1216 12:58:00.921641 2968 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:58:01.019914 kubelet[2968]: E1216 12:58:01.019696 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.019914 kubelet[2968]: W1216 12:58:01.019745 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.019914 kubelet[2968]: E1216 12:58:01.019785 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.021831 kubelet[2968]: E1216 12:58:01.020583 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.021831 kubelet[2968]: W1216 12:58:01.020599 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.021831 kubelet[2968]: E1216 12:58:01.020622 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.021831 kubelet[2968]: E1216 12:58:01.020895 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.021831 kubelet[2968]: W1216 12:58:01.020910 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.021831 kubelet[2968]: E1216 12:58:01.020925 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.021831 kubelet[2968]: E1216 12:58:01.021335 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.021831 kubelet[2968]: W1216 12:58:01.021350 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.021831 kubelet[2968]: E1216 12:58:01.021366 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:01.021831 kubelet[2968]: E1216 12:58:01.021649 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.022328 kubelet[2968]: W1216 12:58:01.021663 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.022328 kubelet[2968]: E1216 12:58:01.021678 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.022328 kubelet[2968]: E1216 12:58:01.021925 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.022328 kubelet[2968]: W1216 12:58:01.021939 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.022328 kubelet[2968]: E1216 12:58:01.021982 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.022328 kubelet[2968]: E1216 12:58:01.022241 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.022328 kubelet[2968]: W1216 12:58:01.022255 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.022328 kubelet[2968]: E1216 12:58:01.022270 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.024042 kubelet[2968]: E1216 12:58:01.022531 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.024042 kubelet[2968]: W1216 12:58:01.022544 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.024042 kubelet[2968]: E1216 12:58:01.022559 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.024042 kubelet[2968]: E1216 12:58:01.022875 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.024042 kubelet[2968]: W1216 12:58:01.022890 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.024042 kubelet[2968]: E1216 12:58:01.022919 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:01.024042 kubelet[2968]: E1216 12:58:01.023229 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.024042 kubelet[2968]: W1216 12:58:01.023255 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.024042 kubelet[2968]: E1216 12:58:01.023275 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.024042 kubelet[2968]: E1216 12:58:01.023554 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.024890 kubelet[2968]: W1216 12:58:01.023569 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.024890 kubelet[2968]: E1216 12:58:01.023583 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.024890 kubelet[2968]: E1216 12:58:01.023822 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.024890 kubelet[2968]: W1216 12:58:01.023836 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.024890 kubelet[2968]: E1216 12:58:01.023850 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.024890 kubelet[2968]: E1216 12:58:01.024150 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.024890 kubelet[2968]: W1216 12:58:01.024165 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.024890 kubelet[2968]: E1216 12:58:01.024179 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.024890 kubelet[2968]: E1216 12:58:01.024429 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.024890 kubelet[2968]: W1216 12:58:01.024442 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.025426 kubelet[2968]: E1216 12:58:01.024456 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:01.025426 kubelet[2968]: E1216 12:58:01.024741 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.025426 kubelet[2968]: W1216 12:58:01.024755 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.025426 kubelet[2968]: E1216 12:58:01.024770 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.026525 kubelet[2968]: E1216 12:58:01.026489 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.026525 kubelet[2968]: W1216 12:58:01.026515 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.026657 kubelet[2968]: E1216 12:58:01.026531 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.029205 kubelet[2968]: E1216 12:58:01.026811 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.029205 kubelet[2968]: W1216 12:58:01.026832 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.029205 kubelet[2968]: E1216 12:58:01.026847 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.029205 kubelet[2968]: E1216 12:58:01.027277 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.029205 kubelet[2968]: W1216 12:58:01.027298 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.029205 kubelet[2968]: E1216 12:58:01.027315 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.029205 kubelet[2968]: E1216 12:58:01.027669 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.029205 kubelet[2968]: W1216 12:58:01.027689 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.029205 kubelet[2968]: E1216 12:58:01.027705 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:01.029205 kubelet[2968]: E1216 12:58:01.028006 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.032417 kubelet[2968]: W1216 12:58:01.028040 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.032417 kubelet[2968]: E1216 12:58:01.028072 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.032417 kubelet[2968]: E1216 12:58:01.028360 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.032417 kubelet[2968]: W1216 12:58:01.028374 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.032417 kubelet[2968]: E1216 12:58:01.028396 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.032417 kubelet[2968]: E1216 12:58:01.028689 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.032417 kubelet[2968]: W1216 12:58:01.028703 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.032417 kubelet[2968]: E1216 12:58:01.028721 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.032417 kubelet[2968]: E1216 12:58:01.029110 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.032417 kubelet[2968]: W1216 12:58:01.029134 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.033352 kubelet[2968]: E1216 12:58:01.029151 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.033352 kubelet[2968]: E1216 12:58:01.029483 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.033352 kubelet[2968]: W1216 12:58:01.029498 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.033352 kubelet[2968]: E1216 12:58:01.029516 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:01.033352 kubelet[2968]: E1216 12:58:01.029861 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.033352 kubelet[2968]: W1216 12:58:01.029879 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.033352 kubelet[2968]: E1216 12:58:01.029896 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.033352 kubelet[2968]: E1216 12:58:01.032460 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.033352 kubelet[2968]: W1216 12:58:01.032485 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.033352 kubelet[2968]: E1216 12:58:01.032506 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.034033 kubelet[2968]: E1216 12:58:01.032932 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.034033 kubelet[2968]: W1216 12:58:01.032947 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.034033 kubelet[2968]: E1216 12:58:01.033010 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.034033 kubelet[2968]: E1216 12:58:01.033580 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.034033 kubelet[2968]: W1216 12:58:01.033595 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.034033 kubelet[2968]: E1216 12:58:01.033613 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.034033 kubelet[2968]: E1216 12:58:01.033926 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.034033 kubelet[2968]: W1216 12:58:01.033978 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.034033 kubelet[2968]: E1216 12:58:01.033997 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:01.034410 kubelet[2968]: E1216 12:58:01.034338 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.034410 kubelet[2968]: W1216 12:58:01.034363 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.034410 kubelet[2968]: E1216 12:58:01.034379 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.035313 kubelet[2968]: E1216 12:58:01.034713 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.035313 kubelet[2968]: W1216 12:58:01.034730 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.035313 kubelet[2968]: E1216 12:58:01.034745 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.035313 kubelet[2968]: E1216 12:58:01.035075 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.035313 kubelet[2968]: W1216 12:58:01.035090 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.035313 kubelet[2968]: E1216 12:58:01.035145 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:01.035697 kubelet[2968]: E1216 12:58:01.035646 2968 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:01.035697 kubelet[2968]: W1216 12:58:01.035661 2968 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:01.035697 kubelet[2968]: E1216 12:58:01.035676 2968 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:01.358430 containerd[1627]: time="2025-12-16T12:58:01.358338278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:01.360723 containerd[1627]: time="2025-12-16T12:58:01.360598999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Dec 16 12:58:01.362302 containerd[1627]: time="2025-12-16T12:58:01.362209801Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:01.380409 containerd[1627]: time="2025-12-16T12:58:01.380323784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:01.383796 containerd[1627]: time="2025-12-16T12:58:01.383453711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.97535256s" Dec 16 12:58:01.383796 containerd[1627]: time="2025-12-16T12:58:01.383531307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 12:58:01.389508 containerd[1627]: time="2025-12-16T12:58:01.389451720Z" level=info msg="CreateContainer within sandbox \"b6a415226c248899a06730d4becfbc65bd173778f3d9e6485e427240d9892b84\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:58:01.405229 containerd[1627]: time="2025-12-16T12:58:01.404586144Z" level=info msg="Container af1ed134de2647118637ee405915c7960d56c0b58dc67c08a6669ec19b1fe588: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:01.408715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910107365.mount: Deactivated successfully. Dec 16 12:58:01.431638 containerd[1627]: time="2025-12-16T12:58:01.431572858Z" level=info msg="CreateContainer within sandbox \"b6a415226c248899a06730d4becfbc65bd173778f3d9e6485e427240d9892b84\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"af1ed134de2647118637ee405915c7960d56c0b58dc67c08a6669ec19b1fe588\"" Dec 16 12:58:01.432839 containerd[1627]: time="2025-12-16T12:58:01.432799858Z" level=info msg="StartContainer for \"af1ed134de2647118637ee405915c7960d56c0b58dc67c08a6669ec19b1fe588\"" Dec 16 12:58:01.435118 containerd[1627]: time="2025-12-16T12:58:01.435080966Z" level=info msg="connecting to shim af1ed134de2647118637ee405915c7960d56c0b58dc67c08a6669ec19b1fe588" address="unix:///run/containerd/s/7c003c15831c55dc353407db50724fa49426a1ebaec6458ca788df9722daa9b3" protocol=ttrpc version=3 Dec 16 12:58:01.480265 systemd[1]: Started cri-containerd-af1ed134de2647118637ee405915c7960d56c0b58dc67c08a6669ec19b1fe588.scope - libcontainer container af1ed134de2647118637ee405915c7960d56c0b58dc67c08a6669ec19b1fe588. 
Dec 16 12:58:01.554000 audit: BPF prog-id=170 op=LOAD Dec 16 12:58:01.554000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3502 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:01.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316564313334646532363437313138363337656534303539313563 Dec 16 12:58:01.554000 audit: BPF prog-id=171 op=LOAD Dec 16 12:58:01.554000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3502 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:01.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316564313334646532363437313138363337656534303539313563 Dec 16 12:58:01.554000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:58:01.554000 audit[3666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3502 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:01.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316564313334646532363437313138363337656534303539313563 Dec 16 12:58:01.554000 audit: BPF prog-id=170 op=UNLOAD Dec 16 12:58:01.554000 audit[3666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3502 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:01.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316564313334646532363437313138363337656534303539313563 Dec 16 12:58:01.554000 audit: BPF prog-id=172 op=LOAD Dec 16 12:58:01.554000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3502 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:01.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316564313334646532363437313138363337656534303539313563 Dec 16 12:58:01.600342 containerd[1627]: time="2025-12-16T12:58:01.600183398Z" level=info msg="StartContainer for 
\"af1ed134de2647118637ee405915c7960d56c0b58dc67c08a6669ec19b1fe588\" returns successfully" Dec 16 12:58:01.630103 systemd[1]: cri-containerd-af1ed134de2647118637ee405915c7960d56c0b58dc67c08a6669ec19b1fe588.scope: Deactivated successfully. Dec 16 12:58:01.632000 audit: BPF prog-id=172 op=UNLOAD Dec 16 12:58:01.669729 containerd[1627]: time="2025-12-16T12:58:01.669640629Z" level=info msg="received container exit event container_id:\"af1ed134de2647118637ee405915c7960d56c0b58dc67c08a6669ec19b1fe588\" id:\"af1ed134de2647118637ee405915c7960d56c0b58dc67c08a6669ec19b1fe588\" pid:3678 exited_at:{seconds:1765889881 nanos:635305794}" Dec 16 12:58:01.719784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af1ed134de2647118637ee405915c7960d56c0b58dc67c08a6669ec19b1fe588-rootfs.mount: Deactivated successfully. Dec 16 12:58:01.932931 containerd[1627]: time="2025-12-16T12:58:01.932680924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:58:01.960864 kubelet[2968]: I1216 12:58:01.960722 2968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cb88554d-2l4qk" podStartSLOduration=4.170775571 podStartE2EDuration="7.960697921s" podCreationTimestamp="2025-12-16 12:57:54 +0000 UTC" firstStartedPulling="2025-12-16 12:57:55.616229094 +0000 UTC m=+26.198803096" lastFinishedPulling="2025-12-16 12:57:59.406151432 +0000 UTC m=+29.988725446" observedRunningTime="2025-12-16 12:57:59.96234706 +0000 UTC m=+30.544921079" watchObservedRunningTime="2025-12-16 12:58:01.960697921 +0000 UTC m=+32.543271936" Dec 16 12:58:02.266124 kubelet[2968]: I1216 12:58:02.265453 2968 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:58:02.336000 audit[3716]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3716 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:02.336000 audit[3716]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd6c1d5750 a2=0 a3=7ffd6c1d573c items=0 ppid=3129 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:02.336000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:02.346000 audit[3716]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3716 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:02.346000 audit[3716]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd6c1d5750 a2=0 a3=7ffd6c1d573c items=0 ppid=3129 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:02.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:02.730045 kubelet[2968]: E1216 12:58:02.729472 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:58:04.728649 kubelet[2968]: E1216 12:58:04.728541 2968 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:58:06.730012 kubelet[2968]: E1216 12:58:06.729788 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:58:08.732075 kubelet[2968]: E1216 12:58:08.732012 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:58:09.220328 containerd[1627]: time="2025-12-16T12:58:09.220090893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:09.222822 containerd[1627]: time="2025-12-16T12:58:09.222720392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 12:58:09.226090 containerd[1627]: time="2025-12-16T12:58:09.225133812Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:09.229005 containerd[1627]: time="2025-12-16T12:58:09.228882898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:09.229865 containerd[1627]: time="2025-12-16T12:58:09.229829081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.297089379s" Dec 16 12:58:09.230224 containerd[1627]: time="2025-12-16T12:58:09.230194920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 12:58:09.254160 containerd[1627]: time="2025-12-16T12:58:09.254103135Z" level=info msg="CreateContainer within sandbox \"b6a415226c248899a06730d4becfbc65bd173778f3d9e6485e427240d9892b84\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:58:09.267870 containerd[1627]: time="2025-12-16T12:58:09.267132125Z" level=info msg="Container f1a7c0e82c02eb64ab23642cbb8a7d0e569d52540b34c0a16a55df58225a5516: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:09.280349 containerd[1627]: time="2025-12-16T12:58:09.280276156Z" level=info msg="CreateContainer within sandbox \"b6a415226c248899a06730d4becfbc65bd173778f3d9e6485e427240d9892b84\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f1a7c0e82c02eb64ab23642cbb8a7d0e569d52540b34c0a16a55df58225a5516\"" Dec 16 12:58:09.281436 
containerd[1627]: time="2025-12-16T12:58:09.281360430Z" level=info msg="StartContainer for \"f1a7c0e82c02eb64ab23642cbb8a7d0e569d52540b34c0a16a55df58225a5516\"" Dec 16 12:58:09.284495 containerd[1627]: time="2025-12-16T12:58:09.284438729Z" level=info msg="connecting to shim f1a7c0e82c02eb64ab23642cbb8a7d0e569d52540b34c0a16a55df58225a5516" address="unix:///run/containerd/s/7c003c15831c55dc353407db50724fa49426a1ebaec6458ca788df9722daa9b3" protocol=ttrpc version=3 Dec 16 12:58:09.320208 systemd[1]: Started cri-containerd-f1a7c0e82c02eb64ab23642cbb8a7d0e569d52540b34c0a16a55df58225a5516.scope - libcontainer container f1a7c0e82c02eb64ab23642cbb8a7d0e569d52540b34c0a16a55df58225a5516. Dec 16 12:58:09.407060 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 16 12:58:09.408479 kernel: audit: type=1334 audit(1765889889.400:571): prog-id=173 op=LOAD Dec 16 12:58:09.408619 kernel: audit: type=1300 audit(1765889889.400:571): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3502 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.400000 audit: BPF prog-id=173 op=LOAD Dec 16 12:58:09.400000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3502 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613763306538326330326562363461623233363432636262386137 Dec 16 12:58:09.423199 kernel: audit: type=1327 audit(1765889889.400:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613763306538326330326562363461623233363432636262386137 Dec 16 12:58:09.400000 audit: BPF prog-id=174 op=LOAD Dec 16 12:58:09.426003 kernel: audit: type=1334 audit(1765889889.400:572): prog-id=174 op=LOAD Dec 16 12:58:09.400000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3502 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.432004 kernel: audit: type=1300 audit(1765889889.400:572): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3502 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613763306538326330326562363461623233363432636262386137 Dec 16 12:58:09.441001 kernel: audit: type=1327 audit(1765889889.400:572): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613763306538326330326562363461623233363432636262386137 Dec 16 12:58:09.441132 kernel: audit: type=1334 audit(1765889889.400:573): prog-id=174 op=UNLOAD Dec 16 12:58:09.400000 audit: BPF prog-id=174 op=UNLOAD Dec 16 12:58:09.400000 audit[3728]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3502 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.444483 kernel: audit: type=1300 audit(1765889889.400:573): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3502 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613763306538326330326562363461623233363432636262386137 Dec 16 12:58:09.454019 kernel: audit: type=1327 audit(1765889889.400:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613763306538326330326562363461623233363432636262386137 Dec 16 12:58:09.454130 kernel: audit: type=1334 audit(1765889889.400:574): prog-id=173 op=UNLOAD Dec 16 12:58:09.400000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:58:09.400000 audit[3728]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3502 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613763306538326330326562363461623233363432636262386137 Dec 16 12:58:09.400000 audit: BPF prog-id=175 op=LOAD Dec 16 12:58:09.400000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3502 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613763306538326330326562363461623233363432636262386137 Dec 16 12:58:09.483800 containerd[1627]: time="2025-12-16T12:58:09.482331675Z" level=info msg="StartContainer for \"f1a7c0e82c02eb64ab23642cbb8a7d0e569d52540b34c0a16a55df58225a5516\" returns successfully" Dec 16 12:58:10.586638 systemd[1]: cri-containerd-f1a7c0e82c02eb64ab23642cbb8a7d0e569d52540b34c0a16a55df58225a5516.scope: Deactivated successfully. 
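
The audit records interleaved with the container start above are runc preparing the new task: on x86_64, syscall 321 is bpf(2), so each prog-id LOAD/UNLOAD pair is runc loading and releasing a BPF program (typically the cgroup device filter), and the PROCTITLE field is simply the invoking command line, hex-encoded with NUL bytes between arguments. A small decoder, assumed here purely for illustration and fed the leading portion of the proctitle value logged above:

    package main

    // Helper assumed for illustration: auditd hex-encodes argv in PROCTITLE
    // records, separating the arguments with NUL bytes.
    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func decodeProctitle(h string) ([]string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return nil, err
        }
        return strings.Split(string(raw), "\x00"), nil
    }

    func main() {
        // Prefix of the proctitle emitted for the runc invocations above.
        args, err := decodeProctitle("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F")
        if err != nil {
            panic(err)
        }
        fmt.Println(args) // [runc --root /run/containerd/runc/k8s.io]
    }
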
Dec 16 12:58:10.587897 systemd[1]: cri-containerd-f1a7c0e82c02eb64ab23642cbb8a7d0e569d52540b34c0a16a55df58225a5516.scope: Consumed 755ms CPU time, 153M memory peak, 7.9M read from disk, 171.3M written to disk. Dec 16 12:58:10.588000 audit: BPF prog-id=175 op=UNLOAD Dec 16 12:58:10.611631 containerd[1627]: time="2025-12-16T12:58:10.611298747Z" level=info msg="received container exit event container_id:\"f1a7c0e82c02eb64ab23642cbb8a7d0e569d52540b34c0a16a55df58225a5516\" id:\"f1a7c0e82c02eb64ab23642cbb8a7d0e569d52540b34c0a16a55df58225a5516\" pid:3742 exited_at:{seconds:1765889890 nanos:610948619}" Dec 16 12:58:10.683700 kubelet[2968]: I1216 12:58:10.683655 2968 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:58:10.685019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1a7c0e82c02eb64ab23642cbb8a7d0e569d52540b34c0a16a55df58225a5516-rootfs.mount: Deactivated successfully. Dec 16 12:58:10.749932 systemd[1]: Created slice kubepods-besteffort-podae2d98cb_e462_4622_a2ef_d1063c3df86a.slice - libcontainer container kubepods-besteffort-podae2d98cb_e462_4622_a2ef_d1063c3df86a.slice. Dec 16 12:58:10.759041 containerd[1627]: time="2025-12-16T12:58:10.758826837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frm2n,Uid:ae2d98cb-e462-4622-a2ef-d1063c3df86a,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:10.772252 systemd[1]: Created slice kubepods-burstable-pod800738c6_a41d_474f_b72d_35aa420d6fcf.slice - libcontainer container kubepods-burstable-pod800738c6_a41d_474f_b72d_35aa420d6fcf.slice. Dec 16 12:58:10.804611 systemd[1]: Created slice kubepods-besteffort-podf26c7895_de48_48b9_98b3_5ed0a263683c.slice - libcontainer container kubepods-besteffort-podf26c7895_de48_48b9_98b3_5ed0a263683c.slice. Dec 16 12:58:10.828434 systemd[1]: Created slice kubepods-besteffort-pod612befca_b93d_4468_b0c5_1d17cad065aa.slice - libcontainer container kubepods-besteffort-pod612befca_b93d_4468_b0c5_1d17cad065aa.slice. Dec 16 12:58:10.843635 systemd[1]: Created slice kubepods-besteffort-pod5f28fd1d_daa3_4b1a_9808_93af3076e192.slice - libcontainer container kubepods-besteffort-pod5f28fd1d_daa3_4b1a_9808_93af3076e192.slice. Dec 16 12:58:10.859337 systemd[1]: Created slice kubepods-burstable-podd0482416_431b_468e_8a2a_835e52be2ad8.slice - libcontainer container kubepods-burstable-podd0482416_431b_468e_8a2a_835e52be2ad8.slice. Dec 16 12:58:10.884617 systemd[1]: Created slice kubepods-besteffort-poda122fec8_3bd1_40c2_adc0_e683491dabc7.slice - libcontainer container kubepods-besteffort-poda122fec8_3bd1_40c2_adc0_e683491dabc7.slice. Dec 16 12:58:10.910275 systemd[1]: Created slice kubepods-besteffort-pod654be68d_8474_4290_8738_6c95ee33b1c3.slice - libcontainer container kubepods-besteffort-pod654be68d_8474_4290_8738_6c95ee33b1c3.slice. 
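
The Created slice entries above are the systemd cgroup driver materialising one slice per pod now that the node has become ready ("Fast updating node status as it just became ready"). The naming pattern can be read directly off this log: QoS class plus the pod UID with its dashes turned into underscores, so the csi-node-driver-frm2n pod (UID ae2d98cb-e462-4622-a2ef-d1063c3df86a in the earlier kubelet errors) lands in kubepods-besteffort-podae2d98cb_e462_4622_a2ef_d1063c3df86a.slice. A sketch of that mapping, inferred from the log itself rather than quoted from kubelet source:

    package main

    // Sketch of the pod-to-slice naming visible above (derived from this log,
    // not from the kubelet's systemd cgroup driver).
    import (
        "fmt"
        "strings"
    )

    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("besteffort", "ae2d98cb-e462-4622-a2ef-d1063c3df86a"))
        // kubepods-besteffort-podae2d98cb_e462_4622_a2ef_d1063c3df86a.slice
    }
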
Dec 16 12:58:10.912364 kubelet[2968]: I1216 12:58:10.912252 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/800738c6-a41d-474f-b72d-35aa420d6fcf-config-volume\") pod \"coredns-674b8bbfcf-5rnz2\" (UID: \"800738c6-a41d-474f-b72d-35aa420d6fcf\") " pod="kube-system/coredns-674b8bbfcf-5rnz2" Dec 16 12:58:10.912894 kubelet[2968]: I1216 12:58:10.912701 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f26c7895-de48-48b9-98b3-5ed0a263683c-calico-apiserver-certs\") pod \"calico-apiserver-6c6f4459b6-4w99m\" (UID: \"f26c7895-de48-48b9-98b3-5ed0a263683c\") " pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" Dec 16 12:58:10.913555 kubelet[2968]: I1216 12:58:10.913365 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtqm6\" (UniqueName: \"kubernetes.io/projected/d0482416-431b-468e-8a2a-835e52be2ad8-kube-api-access-rtqm6\") pod \"coredns-674b8bbfcf-l84nt\" (UID: \"d0482416-431b-468e-8a2a-835e52be2ad8\") " pod="kube-system/coredns-674b8bbfcf-l84nt" Dec 16 12:58:10.914431 kubelet[2968]: I1216 12:58:10.914375 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/654be68d-8474-4290-8738-6c95ee33b1c3-tigera-ca-bundle\") pod \"calico-kube-controllers-7fb664895f-z7td5\" (UID: \"654be68d-8474-4290-8738-6c95ee33b1c3\") " pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" Dec 16 12:58:10.916338 kubelet[2968]: I1216 12:58:10.914653 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzmwp\" (UniqueName: \"kubernetes.io/projected/f26c7895-de48-48b9-98b3-5ed0a263683c-kube-api-access-nzmwp\") pod \"calico-apiserver-6c6f4459b6-4w99m\" (UID: \"f26c7895-de48-48b9-98b3-5ed0a263683c\") " pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" Dec 16 12:58:10.916691 kubelet[2968]: I1216 12:58:10.916662 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5f28fd1d-daa3-4b1a-9808-93af3076e192-goldmane-key-pair\") pod \"goldmane-666569f655-hjfvx\" (UID: \"5f28fd1d-daa3-4b1a-9808-93af3076e192\") " pod="calico-system/goldmane-666569f655-hjfvx" Dec 16 12:58:10.919223 kubelet[2968]: I1216 12:58:10.917240 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d696\" (UniqueName: \"kubernetes.io/projected/5f28fd1d-daa3-4b1a-9808-93af3076e192-kube-api-access-8d696\") pod \"goldmane-666569f655-hjfvx\" (UID: \"5f28fd1d-daa3-4b1a-9808-93af3076e192\") " pod="calico-system/goldmane-666569f655-hjfvx" Dec 16 12:58:10.919223 kubelet[2968]: I1216 12:58:10.918417 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a122fec8-3bd1-40c2-adc0-e683491dabc7-whisker-backend-key-pair\") pod \"whisker-54c54f6c57-lhcg7\" (UID: \"a122fec8-3bd1-40c2-adc0-e683491dabc7\") " pod="calico-system/whisker-54c54f6c57-lhcg7" Dec 16 12:58:10.919223 kubelet[2968]: I1216 12:58:10.918453 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46424\" 
(UniqueName: \"kubernetes.io/projected/a122fec8-3bd1-40c2-adc0-e683491dabc7-kube-api-access-46424\") pod \"whisker-54c54f6c57-lhcg7\" (UID: \"a122fec8-3bd1-40c2-adc0-e683491dabc7\") " pod="calico-system/whisker-54c54f6c57-lhcg7" Dec 16 12:58:10.919223 kubelet[2968]: I1216 12:58:10.918489 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqwlc\" (UniqueName: \"kubernetes.io/projected/612befca-b93d-4468-b0c5-1d17cad065aa-kube-api-access-wqwlc\") pod \"calico-apiserver-6c6f4459b6-wgcgz\" (UID: \"612befca-b93d-4468-b0c5-1d17cad065aa\") " pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" Dec 16 12:58:10.919223 kubelet[2968]: I1216 12:58:10.918523 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f28fd1d-daa3-4b1a-9808-93af3076e192-config\") pod \"goldmane-666569f655-hjfvx\" (UID: \"5f28fd1d-daa3-4b1a-9808-93af3076e192\") " pod="calico-system/goldmane-666569f655-hjfvx" Dec 16 12:58:10.920306 kubelet[2968]: I1216 12:58:10.918554 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f28fd1d-daa3-4b1a-9808-93af3076e192-goldmane-ca-bundle\") pod \"goldmane-666569f655-hjfvx\" (UID: \"5f28fd1d-daa3-4b1a-9808-93af3076e192\") " pod="calico-system/goldmane-666569f655-hjfvx" Dec 16 12:58:10.920306 kubelet[2968]: I1216 12:58:10.918609 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zzkd\" (UniqueName: \"kubernetes.io/projected/654be68d-8474-4290-8738-6c95ee33b1c3-kube-api-access-9zzkd\") pod \"calico-kube-controllers-7fb664895f-z7td5\" (UID: \"654be68d-8474-4290-8738-6c95ee33b1c3\") " pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" Dec 16 12:58:10.920306 kubelet[2968]: I1216 12:58:10.918702 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0482416-431b-468e-8a2a-835e52be2ad8-config-volume\") pod \"coredns-674b8bbfcf-l84nt\" (UID: \"d0482416-431b-468e-8a2a-835e52be2ad8\") " pod="kube-system/coredns-674b8bbfcf-l84nt" Dec 16 12:58:10.920306 kubelet[2968]: I1216 12:58:10.918733 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a122fec8-3bd1-40c2-adc0-e683491dabc7-whisker-ca-bundle\") pod \"whisker-54c54f6c57-lhcg7\" (UID: \"a122fec8-3bd1-40c2-adc0-e683491dabc7\") " pod="calico-system/whisker-54c54f6c57-lhcg7" Dec 16 12:58:10.920306 kubelet[2968]: I1216 12:58:10.918760 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtztk\" (UniqueName: \"kubernetes.io/projected/800738c6-a41d-474f-b72d-35aa420d6fcf-kube-api-access-vtztk\") pod \"coredns-674b8bbfcf-5rnz2\" (UID: \"800738c6-a41d-474f-b72d-35aa420d6fcf\") " pod="kube-system/coredns-674b8bbfcf-5rnz2" Dec 16 12:58:10.920522 kubelet[2968]: I1216 12:58:10.918791 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/612befca-b93d-4468-b0c5-1d17cad065aa-calico-apiserver-certs\") pod \"calico-apiserver-6c6f4459b6-wgcgz\" (UID: \"612befca-b93d-4468-b0c5-1d17cad065aa\") " 
pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" Dec 16 12:58:11.132401 containerd[1627]: time="2025-12-16T12:58:11.132231140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6f4459b6-4w99m,Uid:f26c7895-de48-48b9-98b3-5ed0a263683c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:58:11.138618 containerd[1627]: time="2025-12-16T12:58:11.138580550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6f4459b6-wgcgz,Uid:612befca-b93d-4468-b0c5-1d17cad065aa,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:58:11.159977 containerd[1627]: time="2025-12-16T12:58:11.159897407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hjfvx,Uid:5f28fd1d-daa3-4b1a-9808-93af3076e192,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:11.181518 containerd[1627]: time="2025-12-16T12:58:11.181132325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l84nt,Uid:d0482416-431b-468e-8a2a-835e52be2ad8,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:11.186229 containerd[1627]: time="2025-12-16T12:58:11.186025248Z" level=error msg="Failed to destroy network for sandbox \"abf6070eaaa06d4cf0875fe1cf1bef70eaedcabe93424f8785abb79b214f6687\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.207934 containerd[1627]: time="2025-12-16T12:58:11.191644355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frm2n,Uid:ae2d98cb-e462-4622-a2ef-d1063c3df86a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf6070eaaa06d4cf0875fe1cf1bef70eaedcabe93424f8785abb79b214f6687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.211240 kubelet[2968]: E1216 12:58:11.211189 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf6070eaaa06d4cf0875fe1cf1bef70eaedcabe93424f8785abb79b214f6687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.211663 kubelet[2968]: E1216 12:58:11.211409 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf6070eaaa06d4cf0875fe1cf1bef70eaedcabe93424f8785abb79b214f6687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-frm2n" Dec 16 12:58:11.211815 kubelet[2968]: E1216 12:58:11.211635 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf6070eaaa06d4cf0875fe1cf1bef70eaedcabe93424f8785abb79b214f6687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-frm2n" Dec 16 12:58:11.212166 kubelet[2968]: E1216 12:58:11.212107 2968 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-frm2n_calico-system(ae2d98cb-e462-4622-a2ef-d1063c3df86a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-frm2n_calico-system(ae2d98cb-e462-4622-a2ef-d1063c3df86a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abf6070eaaa06d4cf0875fe1cf1bef70eaedcabe93424f8785abb79b214f6687\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:58:11.222987 containerd[1627]: time="2025-12-16T12:58:11.197725436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c54f6c57-lhcg7,Uid:a122fec8-3bd1-40c2-adc0-e683491dabc7,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:11.233082 containerd[1627]: time="2025-12-16T12:58:11.232430270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb664895f-z7td5,Uid:654be68d-8474-4290-8738-6c95ee33b1c3,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:11.235403 containerd[1627]: time="2025-12-16T12:58:11.235204841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:58:11.394720 containerd[1627]: time="2025-12-16T12:58:11.394545909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rnz2,Uid:800738c6-a41d-474f-b72d-35aa420d6fcf,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:11.475777 containerd[1627]: time="2025-12-16T12:58:11.475597273Z" level=error msg="Failed to destroy network for sandbox \"5e81fdcba0d9dae648199f8b12b88d4160ec35aa9a263a3fbe4ba9cb93a566e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.481968 containerd[1627]: time="2025-12-16T12:58:11.481891664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6f4459b6-4w99m,Uid:f26c7895-de48-48b9-98b3-5ed0a263683c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e81fdcba0d9dae648199f8b12b88d4160ec35aa9a263a3fbe4ba9cb93a566e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.482985 kubelet[2968]: E1216 12:58:11.482626 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e81fdcba0d9dae648199f8b12b88d4160ec35aa9a263a3fbe4ba9cb93a566e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.482985 kubelet[2968]: E1216 12:58:11.482730 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e81fdcba0d9dae648199f8b12b88d4160ec35aa9a263a3fbe4ba9cb93a566e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" Dec 16 12:58:11.482985 kubelet[2968]: E1216 12:58:11.482764 2968 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e81fdcba0d9dae648199f8b12b88d4160ec35aa9a263a3fbe4ba9cb93a566e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" Dec 16 12:58:11.483202 kubelet[2968]: E1216 12:58:11.482835 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6f4459b6-4w99m_calico-apiserver(f26c7895-de48-48b9-98b3-5ed0a263683c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6f4459b6-4w99m_calico-apiserver(f26c7895-de48-48b9-98b3-5ed0a263683c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e81fdcba0d9dae648199f8b12b88d4160ec35aa9a263a3fbe4ba9cb93a566e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" podUID="f26c7895-de48-48b9-98b3-5ed0a263683c" Dec 16 12:58:11.512031 containerd[1627]: time="2025-12-16T12:58:11.511941442Z" level=error msg="Failed to destroy network for sandbox \"edf0e505bc34801d9c2548e4868b3c71835feb8ff465a9e1bc25b6434b7cbf15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.518632 containerd[1627]: time="2025-12-16T12:58:11.518289389Z" level=error msg="Failed to destroy network for sandbox \"6e54a3bacdf841b4144c6d9ec64a8458b5401790555f9ed8e77c82afcfe39c15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.519553 containerd[1627]: time="2025-12-16T12:58:11.512788207Z" level=error msg="Failed to destroy network for sandbox \"3edcfeec139319553ad70f18a74ef634b6613638e203592b5cf10f9ace92cf29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.537577 containerd[1627]: time="2025-12-16T12:58:11.537479223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hjfvx,Uid:5f28fd1d-daa3-4b1a-9808-93af3076e192,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf0e505bc34801d9c2548e4868b3c71835feb8ff465a9e1bc25b6434b7cbf15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.538514 kubelet[2968]: E1216 12:58:11.538237 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf0e505bc34801d9c2548e4868b3c71835feb8ff465a9e1bc25b6434b7cbf15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.538716 kubelet[2968]: E1216 12:58:11.538481 2968 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf0e505bc34801d9c2548e4868b3c71835feb8ff465a9e1bc25b6434b7cbf15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hjfvx" Dec 16 12:58:11.538969 kubelet[2968]: E1216 12:58:11.538668 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf0e505bc34801d9c2548e4868b3c71835feb8ff465a9e1bc25b6434b7cbf15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hjfvx" Dec 16 12:58:11.539175 kubelet[2968]: E1216 12:58:11.539133 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-hjfvx_calico-system(5f28fd1d-daa3-4b1a-9808-93af3076e192)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-hjfvx_calico-system(5f28fd1d-daa3-4b1a-9808-93af3076e192)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edf0e505bc34801d9c2548e4868b3c71835feb8ff465a9e1bc25b6434b7cbf15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hjfvx" podUID="5f28fd1d-daa3-4b1a-9808-93af3076e192" Dec 16 12:58:11.546265 containerd[1627]: time="2025-12-16T12:58:11.546112803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l84nt,Uid:d0482416-431b-468e-8a2a-835e52be2ad8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edcfeec139319553ad70f18a74ef634b6613638e203592b5cf10f9ace92cf29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.546782 kubelet[2968]: E1216 12:58:11.546697 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edcfeec139319553ad70f18a74ef634b6613638e203592b5cf10f9ace92cf29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.546891 containerd[1627]: time="2025-12-16T12:58:11.546821947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6f4459b6-wgcgz,Uid:612befca-b93d-4468-b0c5-1d17cad065aa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e54a3bacdf841b4144c6d9ec64a8458b5401790555f9ed8e77c82afcfe39c15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.549423 kubelet[2968]: E1216 12:58:11.548047 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6e54a3bacdf841b4144c6d9ec64a8458b5401790555f9ed8e77c82afcfe39c15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.549423 kubelet[2968]: E1216 12:58:11.548117 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e54a3bacdf841b4144c6d9ec64a8458b5401790555f9ed8e77c82afcfe39c15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" Dec 16 12:58:11.549423 kubelet[2968]: E1216 12:58:11.548166 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e54a3bacdf841b4144c6d9ec64a8458b5401790555f9ed8e77c82afcfe39c15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" Dec 16 12:58:11.549725 kubelet[2968]: E1216 12:58:11.548245 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6f4459b6-wgcgz_calico-apiserver(612befca-b93d-4468-b0c5-1d17cad065aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6f4459b6-wgcgz_calico-apiserver(612befca-b93d-4468-b0c5-1d17cad065aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e54a3bacdf841b4144c6d9ec64a8458b5401790555f9ed8e77c82afcfe39c15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" podUID="612befca-b93d-4468-b0c5-1d17cad065aa" Dec 16 12:58:11.549725 kubelet[2968]: E1216 12:58:11.548502 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edcfeec139319553ad70f18a74ef634b6613638e203592b5cf10f9ace92cf29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l84nt" Dec 16 12:58:11.549725 kubelet[2968]: E1216 12:58:11.548535 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edcfeec139319553ad70f18a74ef634b6613638e203592b5cf10f9ace92cf29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l84nt" Dec 16 12:58:11.550121 kubelet[2968]: E1216 12:58:11.549237 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-l84nt_kube-system(d0482416-431b-468e-8a2a-835e52be2ad8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-l84nt_kube-system(d0482416-431b-468e-8a2a-835e52be2ad8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3edcfeec139319553ad70f18a74ef634b6613638e203592b5cf10f9ace92cf29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-l84nt" podUID="d0482416-431b-468e-8a2a-835e52be2ad8" Dec 16 12:58:11.570357 containerd[1627]: time="2025-12-16T12:58:11.570279295Z" level=error msg="Failed to destroy network for sandbox \"1b8e550b50b9dbaa5c4bc03f86b04ece2c8ca607196ab02050fb486e3d412ae7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.577949 containerd[1627]: time="2025-12-16T12:58:11.577875490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb664895f-z7td5,Uid:654be68d-8474-4290-8738-6c95ee33b1c3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b8e550b50b9dbaa5c4bc03f86b04ece2c8ca607196ab02050fb486e3d412ae7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.578708 kubelet[2968]: E1216 12:58:11.578501 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b8e550b50b9dbaa5c4bc03f86b04ece2c8ca607196ab02050fb486e3d412ae7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.578708 kubelet[2968]: E1216 12:58:11.578583 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b8e550b50b9dbaa5c4bc03f86b04ece2c8ca607196ab02050fb486e3d412ae7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" Dec 16 12:58:11.578708 kubelet[2968]: E1216 12:58:11.578640 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b8e550b50b9dbaa5c4bc03f86b04ece2c8ca607196ab02050fb486e3d412ae7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" Dec 16 12:58:11.578902 containerd[1627]: time="2025-12-16T12:58:11.578575708Z" level=error msg="Failed to destroy network for sandbox \"075ff98878db8b0c272a044288c09d1a5320275f5a5d2fc8afc15b86e357c5ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.579436 kubelet[2968]: E1216 12:58:11.579365 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fb664895f-z7td5_calico-system(654be68d-8474-4290-8738-6c95ee33b1c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7fb664895f-z7td5_calico-system(654be68d-8474-4290-8738-6c95ee33b1c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b8e550b50b9dbaa5c4bc03f86b04ece2c8ca607196ab02050fb486e3d412ae7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" podUID="654be68d-8474-4290-8738-6c95ee33b1c3" Dec 16 12:58:11.581365 containerd[1627]: time="2025-12-16T12:58:11.581303976Z" level=error msg="Failed to destroy network for sandbox \"b4d628d754d42d65e75826a3979fa4f8b53f79e0ace86a82354d79e797438cc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.584809 containerd[1627]: time="2025-12-16T12:58:11.584742943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c54f6c57-lhcg7,Uid:a122fec8-3bd1-40c2-adc0-e683491dabc7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"075ff98878db8b0c272a044288c09d1a5320275f5a5d2fc8afc15b86e357c5ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.585685 kubelet[2968]: E1216 12:58:11.585523 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"075ff98878db8b0c272a044288c09d1a5320275f5a5d2fc8afc15b86e357c5ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.586109 kubelet[2968]: E1216 12:58:11.585777 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"075ff98878db8b0c272a044288c09d1a5320275f5a5d2fc8afc15b86e357c5ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54c54f6c57-lhcg7" Dec 16 12:58:11.586716 kubelet[2968]: E1216 12:58:11.586008 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"075ff98878db8b0c272a044288c09d1a5320275f5a5d2fc8afc15b86e357c5ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54c54f6c57-lhcg7" Dec 16 12:58:11.587064 kubelet[2968]: E1216 12:58:11.586828 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54c54f6c57-lhcg7_calico-system(a122fec8-3bd1-40c2-adc0-e683491dabc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54c54f6c57-lhcg7_calico-system(a122fec8-3bd1-40c2-adc0-e683491dabc7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"075ff98878db8b0c272a044288c09d1a5320275f5a5d2fc8afc15b86e357c5ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54c54f6c57-lhcg7" podUID="a122fec8-3bd1-40c2-adc0-e683491dabc7" Dec 16 12:58:11.587815 containerd[1627]: time="2025-12-16T12:58:11.587608875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rnz2,Uid:800738c6-a41d-474f-b72d-35aa420d6fcf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4d628d754d42d65e75826a3979fa4f8b53f79e0ace86a82354d79e797438cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.588135 kubelet[2968]: E1216 12:58:11.588061 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4d628d754d42d65e75826a3979fa4f8b53f79e0ace86a82354d79e797438cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:11.588684 kubelet[2968]: E1216 12:58:11.588117 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4d628d754d42d65e75826a3979fa4f8b53f79e0ace86a82354d79e797438cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5rnz2" Dec 16 12:58:11.588684 kubelet[2968]: E1216 12:58:11.588384 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4d628d754d42d65e75826a3979fa4f8b53f79e0ace86a82354d79e797438cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5rnz2" Dec 16 12:58:11.589739 kubelet[2968]: E1216 12:58:11.588648 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5rnz2_kube-system(800738c6-a41d-474f-b72d-35aa420d6fcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5rnz2_kube-system(800738c6-a41d-474f-b72d-35aa420d6fcf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4d628d754d42d65e75826a3979fa4f8b53f79e0ace86a82354d79e797438cc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5rnz2" podUID="800738c6-a41d-474f-b72d-35aa420d6fcf" Dec 16 12:58:11.693396 systemd[1]: run-netns-cni\x2ddbe823e6\x2db273\x2dec12\x2d69fe\x2d17b2b74230b6.mount: Deactivated successfully. 
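All of the sandbox failures above trace back to the same condition: the Calico CNI plugin cannot read /var/lib/calico/nodename, a file the calico/node container writes once it has started and mounted /var/lib/calico from the host (at this point in the log the calico/node image is still being pulled, as the entries at 12:58:24 below show). A minimal sketch of the check being reported, for illustration only and not Calico's own code:

    from pathlib import Path

    NODENAME = Path("/var/lib/calico/nodename")  # written by calico/node after it starts

    def calico_node_ready() -> bool:
        # The CNI plugin reads this file to learn the node name registered by calico/node;
        # while it is absent, every CNI add/delete fails exactly as logged above.
        return NODENAME.is_file() and NODENAME.read_text().strip() != ""

    if __name__ == "__main__":
        print("calico/node ready:", calico_node_ready())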
Dec 16 12:58:22.733328 containerd[1627]: time="2025-12-16T12:58:22.732772700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rnz2,Uid:800738c6-a41d-474f-b72d-35aa420d6fcf,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:22.743246 containerd[1627]: time="2025-12-16T12:58:22.742611391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hjfvx,Uid:5f28fd1d-daa3-4b1a-9808-93af3076e192,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:22.746116 containerd[1627]: time="2025-12-16T12:58:22.743840825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb664895f-z7td5,Uid:654be68d-8474-4290-8738-6c95ee33b1c3,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:22.746116 containerd[1627]: time="2025-12-16T12:58:22.744566996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c54f6c57-lhcg7,Uid:a122fec8-3bd1-40c2-adc0-e683491dabc7,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:22.746116 containerd[1627]: time="2025-12-16T12:58:22.744700053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6f4459b6-wgcgz,Uid:612befca-b93d-4468-b0c5-1d17cad065aa,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:58:23.124197 containerd[1627]: time="2025-12-16T12:58:23.124109636Z" level=error msg="Failed to destroy network for sandbox \"fcdd30dbaf674247f4efd7807170aa8af161b31dea42360e0342237eceddf13d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.128276 containerd[1627]: time="2025-12-16T12:58:23.128203234Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6f4459b6-wgcgz,Uid:612befca-b93d-4468-b0c5-1d17cad065aa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcdd30dbaf674247f4efd7807170aa8af161b31dea42360e0342237eceddf13d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.128792 containerd[1627]: time="2025-12-16T12:58:23.128756636Z" level=error msg="Failed to destroy network for sandbox \"661b5f083e86cb52a6f8361b2b3f9477852fe06e14773019b739ee04ee56066d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.132402 containerd[1627]: time="2025-12-16T12:58:23.132349205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb664895f-z7td5,Uid:654be68d-8474-4290-8738-6c95ee33b1c3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"661b5f083e86cb52a6f8361b2b3f9477852fe06e14773019b739ee04ee56066d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.133160 kubelet[2968]: E1216 12:58:23.133098 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"661b5f083e86cb52a6f8361b2b3f9477852fe06e14773019b739ee04ee56066d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.134379 kubelet[2968]: E1216 12:58:23.133206 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"661b5f083e86cb52a6f8361b2b3f9477852fe06e14773019b739ee04ee56066d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" Dec 16 12:58:23.134379 kubelet[2968]: E1216 12:58:23.133241 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"661b5f083e86cb52a6f8361b2b3f9477852fe06e14773019b739ee04ee56066d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" Dec 16 12:58:23.134379 kubelet[2968]: E1216 12:58:23.133335 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fb664895f-z7td5_calico-system(654be68d-8474-4290-8738-6c95ee33b1c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fb664895f-z7td5_calico-system(654be68d-8474-4290-8738-6c95ee33b1c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"661b5f083e86cb52a6f8361b2b3f9477852fe06e14773019b739ee04ee56066d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" podUID="654be68d-8474-4290-8738-6c95ee33b1c3" Dec 16 12:58:23.135050 kubelet[2968]: E1216 12:58:23.134128 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcdd30dbaf674247f4efd7807170aa8af161b31dea42360e0342237eceddf13d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.135050 kubelet[2968]: E1216 12:58:23.134187 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcdd30dbaf674247f4efd7807170aa8af161b31dea42360e0342237eceddf13d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" Dec 16 12:58:23.135050 kubelet[2968]: E1216 12:58:23.134213 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcdd30dbaf674247f4efd7807170aa8af161b31dea42360e0342237eceddf13d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" Dec 16 12:58:23.135914 kubelet[2968]: E1216 12:58:23.134270 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-6c6f4459b6-wgcgz_calico-apiserver(612befca-b93d-4468-b0c5-1d17cad065aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6f4459b6-wgcgz_calico-apiserver(612befca-b93d-4468-b0c5-1d17cad065aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcdd30dbaf674247f4efd7807170aa8af161b31dea42360e0342237eceddf13d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" podUID="612befca-b93d-4468-b0c5-1d17cad065aa" Dec 16 12:58:23.176422 containerd[1627]: time="2025-12-16T12:58:23.176238756Z" level=error msg="Failed to destroy network for sandbox \"76bbe6012d67a4c326e5a447eeab07f85dd9a715df49c15ca5603812cbcf1d66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.179681 containerd[1627]: time="2025-12-16T12:58:23.179416308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c54f6c57-lhcg7,Uid:a122fec8-3bd1-40c2-adc0-e683491dabc7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76bbe6012d67a4c326e5a447eeab07f85dd9a715df49c15ca5603812cbcf1d66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.196992 kubelet[2968]: E1216 12:58:23.196717 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76bbe6012d67a4c326e5a447eeab07f85dd9a715df49c15ca5603812cbcf1d66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.196992 kubelet[2968]: E1216 12:58:23.196826 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76bbe6012d67a4c326e5a447eeab07f85dd9a715df49c15ca5603812cbcf1d66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54c54f6c57-lhcg7" Dec 16 12:58:23.196992 kubelet[2968]: E1216 12:58:23.196859 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76bbe6012d67a4c326e5a447eeab07f85dd9a715df49c15ca5603812cbcf1d66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54c54f6c57-lhcg7" Dec 16 12:58:23.200002 kubelet[2968]: E1216 12:58:23.196932 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54c54f6c57-lhcg7_calico-system(a122fec8-3bd1-40c2-adc0-e683491dabc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54c54f6c57-lhcg7_calico-system(a122fec8-3bd1-40c2-adc0-e683491dabc7)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"76bbe6012d67a4c326e5a447eeab07f85dd9a715df49c15ca5603812cbcf1d66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54c54f6c57-lhcg7" podUID="a122fec8-3bd1-40c2-adc0-e683491dabc7" Dec 16 12:58:23.204132 containerd[1627]: time="2025-12-16T12:58:23.204073518Z" level=error msg="Failed to destroy network for sandbox \"ce861dc9086739be02368c689f64f3095467d362974e1dd7c29f3a20b71c2748\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.210194 containerd[1627]: time="2025-12-16T12:58:23.210126850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hjfvx,Uid:5f28fd1d-daa3-4b1a-9808-93af3076e192,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce861dc9086739be02368c689f64f3095467d362974e1dd7c29f3a20b71c2748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.212415 kubelet[2968]: E1216 12:58:23.212218 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce861dc9086739be02368c689f64f3095467d362974e1dd7c29f3a20b71c2748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.212415 kubelet[2968]: E1216 12:58:23.212320 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce861dc9086739be02368c689f64f3095467d362974e1dd7c29f3a20b71c2748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hjfvx" Dec 16 12:58:23.212415 kubelet[2968]: E1216 12:58:23.212353 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce861dc9086739be02368c689f64f3095467d362974e1dd7c29f3a20b71c2748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hjfvx" Dec 16 12:58:23.212637 kubelet[2968]: E1216 12:58:23.212423 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-hjfvx_calico-system(5f28fd1d-daa3-4b1a-9808-93af3076e192)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-hjfvx_calico-system(5f28fd1d-daa3-4b1a-9808-93af3076e192)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce861dc9086739be02368c689f64f3095467d362974e1dd7c29f3a20b71c2748\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hjfvx" 
podUID="5f28fd1d-daa3-4b1a-9808-93af3076e192" Dec 16 12:58:23.218577 containerd[1627]: time="2025-12-16T12:58:23.218518509Z" level=error msg="Failed to destroy network for sandbox \"3393f7e80568b1b75a7b4c5198b369b50101f1fe1ce1deabafd78f5519fa4e8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.314196 containerd[1627]: time="2025-12-16T12:58:23.313313424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rnz2,Uid:800738c6-a41d-474f-b72d-35aa420d6fcf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3393f7e80568b1b75a7b4c5198b369b50101f1fe1ce1deabafd78f5519fa4e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.315555 kubelet[2968]: E1216 12:58:23.315327 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3393f7e80568b1b75a7b4c5198b369b50101f1fe1ce1deabafd78f5519fa4e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.315688 kubelet[2968]: E1216 12:58:23.315606 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3393f7e80568b1b75a7b4c5198b369b50101f1fe1ce1deabafd78f5519fa4e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5rnz2" Dec 16 12:58:23.317003 kubelet[2968]: E1216 12:58:23.315676 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3393f7e80568b1b75a7b4c5198b369b50101f1fe1ce1deabafd78f5519fa4e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5rnz2" Dec 16 12:58:23.317003 kubelet[2968]: E1216 12:58:23.315998 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5rnz2_kube-system(800738c6-a41d-474f-b72d-35aa420d6fcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5rnz2_kube-system(800738c6-a41d-474f-b72d-35aa420d6fcf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3393f7e80568b1b75a7b4c5198b369b50101f1fe1ce1deabafd78f5519fa4e8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5rnz2" podUID="800738c6-a41d-474f-b72d-35aa420d6fcf" Dec 16 12:58:23.731554 containerd[1627]: time="2025-12-16T12:58:23.731292922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l84nt,Uid:d0482416-431b-468e-8a2a-835e52be2ad8,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:23.745285 systemd[1]: 
run-netns-cni\x2d8b70393a\x2d3647\x2deb50\x2d8ee5\x2d1e231aea79b6.mount: Deactivated successfully. Dec 16 12:58:23.745427 systemd[1]: run-netns-cni\x2d4408bf3f\x2d4943\x2db406\x2d63f7\x2dd7465aaef44f.mount: Deactivated successfully. Dec 16 12:58:23.745540 systemd[1]: run-netns-cni\x2df730b54a\x2df252\x2d5066\x2d9780\x2d49770da55c66.mount: Deactivated successfully. Dec 16 12:58:23.745645 systemd[1]: run-netns-cni\x2de6ecee56\x2d471f\x2d0512\x2d05b3\x2d49534f1e0e87.mount: Deactivated successfully. Dec 16 12:58:23.745763 systemd[1]: run-netns-cni\x2dd0fc1709\x2d77c8\x2d8df3\x2d5992\x2d1b27479c114d.mount: Deactivated successfully. Dec 16 12:58:23.861548 containerd[1627]: time="2025-12-16T12:58:23.861483846Z" level=error msg="Failed to destroy network for sandbox \"6bbbd8bf6ca8f05827b599fde072530d06625390cad6096f0177c101ee1dc386\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.865708 containerd[1627]: time="2025-12-16T12:58:23.865648742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l84nt,Uid:d0482416-431b-468e-8a2a-835e52be2ad8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bbbd8bf6ca8f05827b599fde072530d06625390cad6096f0177c101ee1dc386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.866013 systemd[1]: run-netns-cni\x2d72a68e6c\x2d9ef9\x2d01cd\x2d280c\x2da304f6f7fca8.mount: Deactivated successfully. Dec 16 12:58:23.867529 kubelet[2968]: E1216 12:58:23.867101 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bbbd8bf6ca8f05827b599fde072530d06625390cad6096f0177c101ee1dc386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:23.867529 kubelet[2968]: E1216 12:58:23.867325 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bbbd8bf6ca8f05827b599fde072530d06625390cad6096f0177c101ee1dc386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l84nt" Dec 16 12:58:23.867529 kubelet[2968]: E1216 12:58:23.867368 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bbbd8bf6ca8f05827b599fde072530d06625390cad6096f0177c101ee1dc386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l84nt" Dec 16 12:58:23.868897 kubelet[2968]: E1216 12:58:23.867602 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-l84nt_kube-system(d0482416-431b-468e-8a2a-835e52be2ad8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-l84nt_kube-system(d0482416-431b-468e-8a2a-835e52be2ad8)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"6bbbd8bf6ca8f05827b599fde072530d06625390cad6096f0177c101ee1dc386\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-l84nt" podUID="d0482416-431b-468e-8a2a-835e52be2ad8" Dec 16 12:58:24.313748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700379329.mount: Deactivated successfully. Dec 16 12:58:24.364419 containerd[1627]: time="2025-12-16T12:58:24.364350495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:24.397021 containerd[1627]: time="2025-12-16T12:58:24.396510019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 12:58:24.407828 containerd[1627]: time="2025-12-16T12:58:24.406748887Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:24.420181 containerd[1627]: time="2025-12-16T12:58:24.420107836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:24.421130 containerd[1627]: time="2025-12-16T12:58:24.421087426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 13.185785864s" Dec 16 12:58:24.421223 containerd[1627]: time="2025-12-16T12:58:24.421140687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 12:58:24.492318 containerd[1627]: time="2025-12-16T12:58:24.492259568Z" level=info msg="CreateContainer within sandbox \"b6a415226c248899a06730d4becfbc65bd173778f3d9e6485e427240d9892b84\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:58:24.597008 containerd[1627]: time="2025-12-16T12:58:24.596912767Z" level=info msg="Container 87dfece8d0b496ca104a8f9d0c68afb6206e6c607a7e45f999829ac2d3e1bfec: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:24.661820 containerd[1627]: time="2025-12-16T12:58:24.661743901Z" level=info msg="CreateContainer within sandbox \"b6a415226c248899a06730d4becfbc65bd173778f3d9e6485e427240d9892b84\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"87dfece8d0b496ca104a8f9d0c68afb6206e6c607a7e45f999829ac2d3e1bfec\"" Dec 16 12:58:24.662900 containerd[1627]: time="2025-12-16T12:58:24.662855538Z" level=info msg="StartContainer for \"87dfece8d0b496ca104a8f9d0c68afb6206e6c607a7e45f999829ac2d3e1bfec\"" Dec 16 12:58:24.668604 containerd[1627]: time="2025-12-16T12:58:24.668548647Z" level=info msg="connecting to shim 87dfece8d0b496ca104a8f9d0c68afb6206e6c607a7e45f999829ac2d3e1bfec" address="unix:///run/containerd/s/7c003c15831c55dc353407db50724fa49426a1ebaec6458ca788df9722daa9b3" protocol=ttrpc version=3 Dec 16 12:58:24.864856 systemd[1]: Started 
cri-containerd-87dfece8d0b496ca104a8f9d0c68afb6206e6c607a7e45f999829ac2d3e1bfec.scope - libcontainer container 87dfece8d0b496ca104a8f9d0c68afb6206e6c607a7e45f999829ac2d3e1bfec. Dec 16 12:58:24.976000 audit: BPF prog-id=176 op=LOAD Dec 16 12:58:24.983596 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 12:58:24.987374 kernel: audit: type=1334 audit(1765889904.976:577): prog-id=176 op=LOAD Dec 16 12:58:24.987457 kernel: audit: type=1300 audit(1765889904.976:577): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019e488 a2=98 a3=0 items=0 ppid=3502 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:24.976000 audit[4154]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019e488 a2=98 a3=0 items=0 ppid=3502 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:24.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646665636538643062343936636131303461386639643063363861 Dec 16 12:58:24.993272 kernel: audit: type=1327 audit(1765889904.976:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646665636538643062343936636131303461386639643063363861 Dec 16 12:58:24.983000 audit: BPF prog-id=177 op=LOAD Dec 16 12:58:24.997608 kernel: audit: type=1334 audit(1765889904.983:578): prog-id=177 op=LOAD Dec 16 12:58:24.983000 audit[4154]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00019e218 a2=98 a3=0 items=0 ppid=3502 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:24.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646665636538643062343936636131303461386639643063363861 Dec 16 12:58:25.010533 kernel: audit: type=1300 audit(1765889904.983:578): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00019e218 a2=98 a3=0 items=0 ppid=3502 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:25.010814 kernel: audit: type=1327 audit(1765889904.983:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646665636538643062343936636131303461386639643063363861 Dec 16 12:58:24.983000 audit: BPF prog-id=177 op=UNLOAD Dec 16 12:58:25.013040 kernel: audit: type=1334 audit(1765889904.983:579): prog-id=177 op=UNLOAD Dec 16 12:58:24.983000 audit[4154]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3502 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:24.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646665636538643062343936636131303461386639643063363861 Dec 16 12:58:25.022472 kernel: audit: type=1300 audit(1765889904.983:579): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3502 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:25.022631 kernel: audit: type=1327 audit(1765889904.983:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646665636538643062343936636131303461386639643063363861 Dec 16 12:58:25.026674 kernel: audit: type=1334 audit(1765889904.983:580): prog-id=176 op=UNLOAD Dec 16 12:58:24.983000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:58:24.983000 audit[4154]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3502 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:24.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646665636538643062343936636131303461386639643063363861 Dec 16 12:58:24.983000 audit: BPF prog-id=178 op=LOAD Dec 16 12:58:24.983000 audit[4154]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019e6e8 a2=98 a3=0 items=0 ppid=3502 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:24.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646665636538643062343936636131303461386639643063363861 Dec 16 12:58:25.100187 containerd[1627]: time="2025-12-16T12:58:25.099740952Z" level=info msg="StartContainer for \"87dfece8d0b496ca104a8f9d0c68afb6206e6c607a7e45f999829ac2d3e1bfec\" returns successfully" Dec 16 12:58:25.550175 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:58:25.550417 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
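The audit records above carry the audited command line as a hex-encoded proctitle value with NUL-separated arguments. A small decoding sketch (standard library only, not part of any audit tooling; the sample value is just the leading bytes of the proctitle logged above):

    def decode_proctitle(hex_value: str) -> list:
        raw = bytes.fromhex(hex_value)
        # Arguments are separated by NUL bytes, mirroring /proc/<pid>/cmdline.
        return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

    if __name__ == "__main__":
        sample = "72756E63002D2D726F6F74"   # first bytes of the value above
        print(decode_proctitle(sample))     # -> ['runc', '--root']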
Dec 16 12:58:25.571441 kubelet[2968]: I1216 12:58:25.568706 2968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hpxl7" podStartSLOduration=1.785665745 podStartE2EDuration="30.565336921s" podCreationTimestamp="2025-12-16 12:57:55 +0000 UTC" firstStartedPulling="2025-12-16 12:57:55.643922914 +0000 UTC m=+26.226496916" lastFinishedPulling="2025-12-16 12:58:24.423594083 +0000 UTC m=+55.006168092" observedRunningTime="2025-12-16 12:58:25.52947714 +0000 UTC m=+56.112051158" watchObservedRunningTime="2025-12-16 12:58:25.565336921 +0000 UTC m=+56.147910952" Dec 16 12:58:25.740118 containerd[1627]: time="2025-12-16T12:58:25.739723485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6f4459b6-4w99m,Uid:f26c7895-de48-48b9-98b3-5ed0a263683c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:58:25.925979 containerd[1627]: time="2025-12-16T12:58:25.925840609Z" level=error msg="Failed to destroy network for sandbox \"1f202b79351c5fe2be46d21d474c10ed374fa37f469aa8e64f043afa77a71fae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:25.930199 containerd[1627]: time="2025-12-16T12:58:25.930118334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6f4459b6-4w99m,Uid:f26c7895-de48-48b9-98b3-5ed0a263683c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f202b79351c5fe2be46d21d474c10ed374fa37f469aa8e64f043afa77a71fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:25.932580 kubelet[2968]: E1216 12:58:25.932501 2968 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f202b79351c5fe2be46d21d474c10ed374fa37f469aa8e64f043afa77a71fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:25.932706 kubelet[2968]: E1216 12:58:25.932644 2968 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f202b79351c5fe2be46d21d474c10ed374fa37f469aa8e64f043afa77a71fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" Dec 16 12:58:25.932785 kubelet[2968]: E1216 12:58:25.932695 2968 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f202b79351c5fe2be46d21d474c10ed374fa37f469aa8e64f043afa77a71fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" Dec 16 12:58:25.934103 systemd[1]: run-netns-cni\x2d0a99cd83\x2d4dda\x2d777e\x2da5ab\x2dac717452f25c.mount: Deactivated successfully. 
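The pod_startup_latency_tracker entry above for calico-node-hpxl7 is internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is approximately that span minus the image-pull window (firstStartedPulling to lastFinishedPulling). A quick check of the arithmetic using the timestamps from the log (rounded to microseconds, so the results match the logged figures only approximately):

    from datetime import datetime

    created    = datetime(2025, 12, 16, 12, 57, 55)
    pull_start = datetime(2025, 12, 16, 12, 57, 55, 643923)
    pull_end   = datetime(2025, 12, 16, 12, 58, 24, 423594)
    observed   = datetime(2025, 12, 16, 12, 58, 25, 565337)

    e2e = (observed - created).total_seconds()            # ~30.565 s, cf. podStartE2EDuration
    slo = e2e - (pull_end - pull_start).total_seconds()   # ~1.786 s, cf. podStartSLOduration
    print(f"E2E={e2e:.3f}s SLO={slo:.3f}s")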
Dec 16 12:58:25.943636 kubelet[2968]: E1216 12:58:25.943535 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6f4459b6-4w99m_calico-apiserver(f26c7895-de48-48b9-98b3-5ed0a263683c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6f4459b6-4w99m_calico-apiserver(f26c7895-de48-48b9-98b3-5ed0a263683c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f202b79351c5fe2be46d21d474c10ed374fa37f469aa8e64f043afa77a71fae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" podUID="f26c7895-de48-48b9-98b3-5ed0a263683c" Dec 16 12:58:26.263858 kubelet[2968]: I1216 12:58:26.263711 2968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46424\" (UniqueName: \"kubernetes.io/projected/a122fec8-3bd1-40c2-adc0-e683491dabc7-kube-api-access-46424\") pod \"a122fec8-3bd1-40c2-adc0-e683491dabc7\" (UID: \"a122fec8-3bd1-40c2-adc0-e683491dabc7\") " Dec 16 12:58:26.263858 kubelet[2968]: I1216 12:58:26.263797 2968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a122fec8-3bd1-40c2-adc0-e683491dabc7-whisker-ca-bundle\") pod \"a122fec8-3bd1-40c2-adc0-e683491dabc7\" (UID: \"a122fec8-3bd1-40c2-adc0-e683491dabc7\") " Dec 16 12:58:26.264116 kubelet[2968]: I1216 12:58:26.263866 2968 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a122fec8-3bd1-40c2-adc0-e683491dabc7-whisker-backend-key-pair\") pod \"a122fec8-3bd1-40c2-adc0-e683491dabc7\" (UID: \"a122fec8-3bd1-40c2-adc0-e683491dabc7\") " Dec 16 12:58:26.287601 kubelet[2968]: I1216 12:58:26.285329 2968 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a122fec8-3bd1-40c2-adc0-e683491dabc7-kube-api-access-46424" (OuterVolumeSpecName: "kube-api-access-46424") pod "a122fec8-3bd1-40c2-adc0-e683491dabc7" (UID: "a122fec8-3bd1-40c2-adc0-e683491dabc7"). InnerVolumeSpecName "kube-api-access-46424". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:58:26.286194 systemd[1]: var-lib-kubelet-pods-a122fec8\x2d3bd1\x2d40c2\x2dadc0\x2de683491dabc7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d46424.mount: Deactivated successfully. Dec 16 12:58:26.288417 kubelet[2968]: I1216 12:58:26.285553 2968 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a122fec8-3bd1-40c2-adc0-e683491dabc7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a122fec8-3bd1-40c2-adc0-e683491dabc7" (UID: "a122fec8-3bd1-40c2-adc0-e683491dabc7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:58:26.293830 systemd[1]: var-lib-kubelet-pods-a122fec8\x2d3bd1\x2d40c2\x2dadc0\x2de683491dabc7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
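The mount and netns unit names in these entries are systemd-escaped: literal '-' and '~' characters appear as \x2d and \x7e, while the remaining single dashes separate path components. A tiny decoder, for illustration only:

    import re

    def unescape_systemd(component: str) -> str:
        # systemd unit names encode disallowed bytes as \xNN; this restores them.
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), component)

    name = r"var-lib-kubelet-pods-a122fec8\x2d3bd1\x2d40c2\x2dadc0\x2de683491dabc7"
    print(unescape_systemd(name))
    # -> var-lib-kubelet-pods-a122fec8-3bd1-40c2-adc0-e683491dabc7
    # (the remaining single dashes stand for "/" in the mounted path)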
Dec 16 12:58:26.295280 kubelet[2968]: I1216 12:58:26.295242 2968 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a122fec8-3bd1-40c2-adc0-e683491dabc7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a122fec8-3bd1-40c2-adc0-e683491dabc7" (UID: "a122fec8-3bd1-40c2-adc0-e683491dabc7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:58:26.366016 kubelet[2968]: I1216 12:58:26.365931 2968 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-46424\" (UniqueName: \"kubernetes.io/projected/a122fec8-3bd1-40c2-adc0-e683491dabc7-kube-api-access-46424\") on node \"srv-gtzk5.gb1.brightbox.com\" DevicePath \"\"" Dec 16 12:58:26.366378 kubelet[2968]: I1216 12:58:26.366354 2968 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a122fec8-3bd1-40c2-adc0-e683491dabc7-whisker-ca-bundle\") on node \"srv-gtzk5.gb1.brightbox.com\" DevicePath \"\"" Dec 16 12:58:26.366592 kubelet[2968]: I1216 12:58:26.366535 2968 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a122fec8-3bd1-40c2-adc0-e683491dabc7-whisker-backend-key-pair\") on node \"srv-gtzk5.gb1.brightbox.com\" DevicePath \"\"" Dec 16 12:58:26.466868 systemd[1]: Removed slice kubepods-besteffort-poda122fec8_3bd1_40c2_adc0_e683491dabc7.slice - libcontainer container kubepods-besteffort-poda122fec8_3bd1_40c2_adc0_e683491dabc7.slice. Dec 16 12:58:26.668460 systemd[1]: Created slice kubepods-besteffort-podfc47de84_c877_49ec_9b26_5c23204d879d.slice - libcontainer container kubepods-besteffort-podfc47de84_c877_49ec_9b26_5c23204d879d.slice. 
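Both slice names in the entries above follow the same pattern: the pod UID with its dashes replaced by underscores, wrapped in kubepods-besteffort-pod<...>.slice. A one-line helper that reproduces the mapping seen in the log (illustrative only, not kubelet code):

    def besteffort_pod_slice(pod_uid: str) -> str:
        # Mirrors the names logged above, e.g. the removed whisker pod's slice.
        return "kubepods-besteffort-pod" + pod_uid.replace("-", "_") + ".slice"

    assert besteffort_pod_slice("a122fec8-3bd1-40c2-adc0-e683491dabc7") == \
        "kubepods-besteffort-poda122fec8_3bd1_40c2_adc0_e683491dabc7.slice"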
Dec 16 12:58:26.730346 containerd[1627]: time="2025-12-16T12:58:26.730260950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frm2n,Uid:ae2d98cb-e462-4622-a2ef-d1063c3df86a,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:26.772988 kubelet[2968]: I1216 12:58:26.772826 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc47de84-c877-49ec-9b26-5c23204d879d-whisker-backend-key-pair\") pod \"whisker-7bfdd67657-r2jq9\" (UID: \"fc47de84-c877-49ec-9b26-5c23204d879d\") " pod="calico-system/whisker-7bfdd67657-r2jq9" Dec 16 12:58:26.772988 kubelet[2968]: I1216 12:58:26.772925 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc47de84-c877-49ec-9b26-5c23204d879d-whisker-ca-bundle\") pod \"whisker-7bfdd67657-r2jq9\" (UID: \"fc47de84-c877-49ec-9b26-5c23204d879d\") " pod="calico-system/whisker-7bfdd67657-r2jq9" Dec 16 12:58:26.772988 kubelet[2968]: I1216 12:58:26.772989 2968 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcxrs\" (UniqueName: \"kubernetes.io/projected/fc47de84-c877-49ec-9b26-5c23204d879d-kube-api-access-lcxrs\") pod \"whisker-7bfdd67657-r2jq9\" (UID: \"fc47de84-c877-49ec-9b26-5c23204d879d\") " pod="calico-system/whisker-7bfdd67657-r2jq9" Dec 16 12:58:26.975596 containerd[1627]: time="2025-12-16T12:58:26.975321893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bfdd67657-r2jq9,Uid:fc47de84-c877-49ec-9b26-5c23204d879d,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:27.184329 systemd-networkd[1550]: calid0a706e2d6a: Link UP Dec 16 12:58:27.185457 systemd-networkd[1550]: calid0a706e2d6a: Gained carrier Dec 16 12:58:27.224985 containerd[1627]: 2025-12-16 12:58:26.771 [INFO][4291] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:58:27.224985 containerd[1627]: 2025-12-16 12:58:26.809 [INFO][4291] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0 csi-node-driver- calico-system ae2d98cb-e462-4622-a2ef-d1063c3df86a 748 0 2025-12-16 12:57:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-gtzk5.gb1.brightbox.com csi-node-driver-frm2n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid0a706e2d6a [] [] }} ContainerID="7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" Namespace="calico-system" Pod="csi-node-driver-frm2n" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-" Dec 16 12:58:27.224985 containerd[1627]: 2025-12-16 12:58:26.809 [INFO][4291] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" Namespace="calico-system" Pod="csi-node-driver-frm2n" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0" Dec 16 12:58:27.224985 containerd[1627]: 2025-12-16 12:58:27.058 [INFO][4302] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" HandleID="k8s-pod-network.7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" Workload="srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0" Dec 16 12:58:27.225338 containerd[1627]: 2025-12-16 12:58:27.061 [INFO][4302] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" HandleID="k8s-pod-network.7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" Workload="srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000372d40), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gtzk5.gb1.brightbox.com", "pod":"csi-node-driver-frm2n", "timestamp":"2025-12-16 12:58:27.057997835 +0000 UTC"}, Hostname:"srv-gtzk5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:27.225338 containerd[1627]: 2025-12-16 12:58:27.061 [INFO][4302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:27.225338 containerd[1627]: 2025-12-16 12:58:27.062 [INFO][4302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:58:27.225338 containerd[1627]: 2025-12-16 12:58:27.063 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gtzk5.gb1.brightbox.com' Dec 16 12:58:27.225338 containerd[1627]: 2025-12-16 12:58:27.094 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.225338 containerd[1627]: 2025-12-16 12:58:27.111 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.225338 containerd[1627]: 2025-12-16 12:58:27.122 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.225338 containerd[1627]: 2025-12-16 12:58:27.128 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.225338 containerd[1627]: 2025-12-16 12:58:27.134 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.225856 containerd[1627]: 2025-12-16 12:58:27.135 [INFO][4302] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.225856 containerd[1627]: 2025-12-16 12:58:27.141 [INFO][4302] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408 Dec 16 12:58:27.225856 containerd[1627]: 2025-12-16 12:58:27.148 [INFO][4302] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.225856 containerd[1627]: 2025-12-16 12:58:27.157 [INFO][4302] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.65/26] block=192.168.21.64/26 handle="k8s-pod-network.7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" 
host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.225856 containerd[1627]: 2025-12-16 12:58:27.158 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.65/26] handle="k8s-pod-network.7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.225856 containerd[1627]: 2025-12-16 12:58:27.158 [INFO][4302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:58:27.225856 containerd[1627]: 2025-12-16 12:58:27.159 [INFO][4302] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.65/26] IPv6=[] ContainerID="7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" HandleID="k8s-pod-network.7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" Workload="srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0" Dec 16 12:58:27.226195 containerd[1627]: 2025-12-16 12:58:27.164 [INFO][4291] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" Namespace="calico-system" Pod="csi-node-driver-frm2n" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae2d98cb-e462-4622-a2ef-d1063c3df86a", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-frm2n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid0a706e2d6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:27.226336 containerd[1627]: 2025-12-16 12:58:27.164 [INFO][4291] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.65/32] ContainerID="7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" Namespace="calico-system" Pod="csi-node-driver-frm2n" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0" Dec 16 12:58:27.226336 containerd[1627]: 2025-12-16 12:58:27.165 [INFO][4291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0a706e2d6a ContainerID="7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" Namespace="calico-system" Pod="csi-node-driver-frm2n" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0" Dec 16 12:58:27.226336 containerd[1627]: 2025-12-16 12:58:27.181 [INFO][4291] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" Namespace="calico-system" Pod="csi-node-driver-frm2n" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0" Dec 16 12:58:27.233150 containerd[1627]: 2025-12-16 12:58:27.183 [INFO][4291] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" Namespace="calico-system" Pod="csi-node-driver-frm2n" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae2d98cb-e462-4622-a2ef-d1063c3df86a", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408", Pod:"csi-node-driver-frm2n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid0a706e2d6a", MAC:"de:80:72:e2:5f:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:27.234733 containerd[1627]: 2025-12-16 12:58:27.202 [INFO][4291] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" Namespace="calico-system" Pod="csi-node-driver-frm2n" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-csi--node--driver--frm2n-eth0" Dec 16 12:58:27.322022 systemd-networkd[1550]: cali2439840b6b3: Link UP Dec 16 12:58:27.324537 systemd-networkd[1550]: cali2439840b6b3: Gained carrier Dec 16 12:58:27.354179 containerd[1627]: 2025-12-16 12:58:27.043 [INFO][4310] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:58:27.354179 containerd[1627]: 2025-12-16 12:58:27.064 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0 whisker-7bfdd67657- calico-system fc47de84-c877-49ec-9b26-5c23204d879d 964 0 2025-12-16 12:58:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bfdd67657 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-gtzk5.gb1.brightbox.com whisker-7bfdd67657-r2jq9 eth0 whisker [] [] [kns.calico-system 
ksa.calico-system.whisker] cali2439840b6b3 [] [] }} ContainerID="6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" Namespace="calico-system" Pod="whisker-7bfdd67657-r2jq9" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-" Dec 16 12:58:27.354179 containerd[1627]: 2025-12-16 12:58:27.064 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" Namespace="calico-system" Pod="whisker-7bfdd67657-r2jq9" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0" Dec 16 12:58:27.354179 containerd[1627]: 2025-12-16 12:58:27.139 [INFO][4323] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" HandleID="k8s-pod-network.6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" Workload="srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0" Dec 16 12:58:27.354572 containerd[1627]: 2025-12-16 12:58:27.140 [INFO][4323] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" HandleID="k8s-pod-network.6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" Workload="srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032ee50), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gtzk5.gb1.brightbox.com", "pod":"whisker-7bfdd67657-r2jq9", "timestamp":"2025-12-16 12:58:27.139885468 +0000 UTC"}, Hostname:"srv-gtzk5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:27.354572 containerd[1627]: 2025-12-16 12:58:27.140 [INFO][4323] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:27.354572 containerd[1627]: 2025-12-16 12:58:27.158 [INFO][4323] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:58:27.354572 containerd[1627]: 2025-12-16 12:58:27.159 [INFO][4323] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gtzk5.gb1.brightbox.com' Dec 16 12:58:27.354572 containerd[1627]: 2025-12-16 12:58:27.209 [INFO][4323] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.354572 containerd[1627]: 2025-12-16 12:58:27.240 [INFO][4323] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.354572 containerd[1627]: 2025-12-16 12:58:27.257 [INFO][4323] ipam/ipam.go 511: Trying affinity for 192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.354572 containerd[1627]: 2025-12-16 12:58:27.265 [INFO][4323] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.354572 containerd[1627]: 2025-12-16 12:58:27.279 [INFO][4323] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.355472 containerd[1627]: 2025-12-16 12:58:27.280 [INFO][4323] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.355472 containerd[1627]: 2025-12-16 12:58:27.282 [INFO][4323] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b Dec 16 12:58:27.355472 containerd[1627]: 2025-12-16 12:58:27.290 [INFO][4323] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.355472 containerd[1627]: 2025-12-16 12:58:27.303 [INFO][4323] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.66/26] block=192.168.21.64/26 handle="k8s-pod-network.6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.355472 containerd[1627]: 2025-12-16 12:58:27.303 [INFO][4323] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.66/26] handle="k8s-pod-network.6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:27.355472 containerd[1627]: 2025-12-16 12:58:27.303 [INFO][4323] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:58:27.355472 containerd[1627]: 2025-12-16 12:58:27.303 [INFO][4323] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.66/26] IPv6=[] ContainerID="6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" HandleID="k8s-pod-network.6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" Workload="srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0" Dec 16 12:58:27.355776 containerd[1627]: 2025-12-16 12:58:27.313 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" Namespace="calico-system" Pod="whisker-7bfdd67657-r2jq9" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0", GenerateName:"whisker-7bfdd67657-", Namespace:"calico-system", SelfLink:"", UID:"fc47de84-c877-49ec-9b26-5c23204d879d", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bfdd67657", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"", Pod:"whisker-7bfdd67657-r2jq9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2439840b6b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:27.355776 containerd[1627]: 2025-12-16 12:58:27.313 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.66/32] ContainerID="6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" Namespace="calico-system" Pod="whisker-7bfdd67657-r2jq9" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0" Dec 16 12:58:27.357073 containerd[1627]: 2025-12-16 12:58:27.313 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2439840b6b3 ContainerID="6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" Namespace="calico-system" Pod="whisker-7bfdd67657-r2jq9" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0" Dec 16 12:58:27.357073 containerd[1627]: 2025-12-16 12:58:27.325 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" Namespace="calico-system" Pod="whisker-7bfdd67657-r2jq9" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0" Dec 16 12:58:27.357182 containerd[1627]: 2025-12-16 12:58:27.325 [INFO][4310] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" 
Namespace="calico-system" Pod="whisker-7bfdd67657-r2jq9" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0", GenerateName:"whisker-7bfdd67657-", Namespace:"calico-system", SelfLink:"", UID:"fc47de84-c877-49ec-9b26-5c23204d879d", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bfdd67657", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b", Pod:"whisker-7bfdd67657-r2jq9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2439840b6b3", MAC:"b2:6e:f8:79:e7:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:27.357294 containerd[1627]: 2025-12-16 12:58:27.345 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" Namespace="calico-system" Pod="whisker-7bfdd67657-r2jq9" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-whisker--7bfdd67657--r2jq9-eth0" Dec 16 12:58:27.465017 containerd[1627]: time="2025-12-16T12:58:27.464629173Z" level=info msg="connecting to shim 6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b" address="unix:///run/containerd/s/92f60e152461254e5b07c95a837ef82c8d244f072fc14d160dd19749afa35ea3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:27.469193 containerd[1627]: time="2025-12-16T12:58:27.469147973Z" level=info msg="connecting to shim 7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408" address="unix:///run/containerd/s/79d36a263108f4ad91a17bd6297bdb3f72d1424c1986cf720177de119cd6ca85" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:27.518006 systemd[1]: Started cri-containerd-6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b.scope - libcontainer container 6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b. Dec 16 12:58:27.546275 systemd[1]: Started cri-containerd-7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408.scope - libcontainer container 7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408. 
Dec 16 12:58:27.596000 audit: BPF prog-id=179 op=LOAD Dec 16 12:58:27.597000 audit: BPF prog-id=180 op=LOAD Dec 16 12:58:27.597000 audit[4382]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4360 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535313331363135353262346662313738656336653663393037 Dec 16 12:58:27.597000 audit: BPF prog-id=180 op=UNLOAD Dec 16 12:58:27.597000 audit[4382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4360 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535313331363135353262346662313738656336653663393037 Dec 16 12:58:27.597000 audit: BPF prog-id=181 op=LOAD Dec 16 12:58:27.597000 audit[4382]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4360 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535313331363135353262346662313738656336653663393037 Dec 16 12:58:27.599000 audit: BPF prog-id=182 op=LOAD Dec 16 12:58:27.599000 audit[4382]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4360 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535313331363135353262346662313738656336653663393037 Dec 16 12:58:27.599000 audit: BPF prog-id=182 op=UNLOAD Dec 16 12:58:27.599000 audit[4382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4360 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535313331363135353262346662313738656336653663393037 Dec 16 12:58:27.599000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:58:27.599000 audit[4382]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4360 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535313331363135353262346662313738656336653663393037 Dec 16 12:58:27.599000 audit: BPF prog-id=183 op=LOAD Dec 16 12:58:27.599000 audit[4382]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4360 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535313331363135353262346662313738656336653663393037 Dec 16 12:58:27.607000 audit: BPF prog-id=184 op=LOAD Dec 16 12:58:27.612000 audit: BPF prog-id=185 op=LOAD Dec 16 12:58:27.612000 audit[4407]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4362 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762306133623339333037356134626261646662346464303537383663 Dec 16 12:58:27.613000 audit: BPF prog-id=185 op=UNLOAD Dec 16 12:58:27.613000 audit[4407]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4362 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762306133623339333037356134626261646662346464303537383663 Dec 16 12:58:27.614000 audit: BPF prog-id=186 op=LOAD Dec 16 12:58:27.614000 audit[4407]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4362 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762306133623339333037356134626261646662346464303537383663 Dec 16 12:58:27.615000 audit: BPF prog-id=187 op=LOAD Dec 16 12:58:27.615000 audit[4407]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4362 pid=4407 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762306133623339333037356134626261646662346464303537383663 Dec 16 12:58:27.615000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:58:27.615000 audit[4407]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4362 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762306133623339333037356134626261646662346464303537383663 Dec 16 12:58:27.615000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:58:27.615000 audit[4407]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4362 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762306133623339333037356134626261646662346464303537383663 Dec 16 12:58:27.615000 audit: BPF prog-id=188 op=LOAD Dec 16 12:58:27.615000 audit[4407]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4362 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762306133623339333037356134626261646662346464303537383663 Dec 16 12:58:27.658240 containerd[1627]: time="2025-12-16T12:58:27.658180245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frm2n,Uid:ae2d98cb-e462-4622-a2ef-d1063c3df86a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b0a3b393075a4bbadfb4dd05786c8c862b4ee04e7d74edaf783a513c60f3408\"" Dec 16 12:58:27.681364 containerd[1627]: time="2025-12-16T12:58:27.681232115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:58:27.738910 kubelet[2968]: I1216 12:58:27.737856 2968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a122fec8-3bd1-40c2-adc0-e683491dabc7" path="/var/lib/kubelet/pods/a122fec8-3bd1-40c2-adc0-e683491dabc7/volumes" Dec 16 12:58:27.755533 containerd[1627]: time="2025-12-16T12:58:27.755280101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bfdd67657-r2jq9,Uid:fc47de84-c877-49ec-9b26-5c23204d879d,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"6aa5513161552b4fb178ec6e6c907be112fbabdbbe8518c127b8c647d760ed4b\"" Dec 16 12:58:28.060439 containerd[1627]: time="2025-12-16T12:58:28.060370941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:28.061655 containerd[1627]: time="2025-12-16T12:58:28.061504245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:58:28.061655 containerd[1627]: time="2025-12-16T12:58:28.061567690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:28.062515 kubelet[2968]: E1216 12:58:28.062460 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:58:28.064841 kubelet[2968]: E1216 12:58:28.063013 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:58:28.064927 containerd[1627]: time="2025-12-16T12:58:28.064172525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:58:28.110023 kubelet[2968]: E1216 12:58:28.109296 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-frm2n_calico-system(ae2d98cb-e462-4622-a2ef-d1063c3df86a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:28.410813 containerd[1627]: time="2025-12-16T12:58:28.410749319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:28.411847 containerd[1627]: time="2025-12-16T12:58:28.411800728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:58:28.415718 kubelet[2968]: E1216 12:58:28.415642 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:58:28.416571 kubelet[2968]: E1216 12:58:28.416015 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:58:28.418152 kubelet[2968]: E1216 12:58:28.417415 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3b832fcc79f24e90b591efebd74820fd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lcxrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bfdd67657-r2jq9_calico-system(fc47de84-c877-49ec-9b26-5c23204d879d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:28.430357 containerd[1627]: time="2025-12-16T12:58:28.411904048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: 
active requests=0, bytes read=0" Dec 16 12:58:28.430743 containerd[1627]: time="2025-12-16T12:58:28.417058807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:58:28.509000 audit: BPF prog-id=189 op=LOAD Dec 16 12:58:28.509000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6a586900 a2=98 a3=1fffffffffffffff items=0 ppid=4456 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.509000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:28.509000 audit: BPF prog-id=189 op=UNLOAD Dec 16 12:58:28.509000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff6a5868d0 a3=0 items=0 ppid=4456 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.509000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:28.510000 audit: BPF prog-id=190 op=LOAD Dec 16 12:58:28.510000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6a5867e0 a2=94 a3=3 items=0 ppid=4456 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.510000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:28.510000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:58:28.510000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff6a5867e0 a2=94 a3=3 items=0 ppid=4456 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.510000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:28.510000 audit: BPF prog-id=191 op=LOAD Dec 16 12:58:28.510000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6a586820 a2=94 a3=7fff6a586a00 items=0 ppid=4456 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.510000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:28.510000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:58:28.510000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff6a586820 a2=94 a3=7fff6a586a00 items=0 ppid=4456 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.510000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:28.513000 audit: BPF prog-id=192 op=LOAD Dec 16 12:58:28.513000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc49900440 a2=98 a3=3 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.513000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:58:28.513000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc49900410 a3=0 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.514000 audit: BPF prog-id=193 op=LOAD Dec 16 12:58:28.514000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc49900230 a2=94 a3=54428f items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.514000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:58:28.514000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc49900230 a2=94 a3=54428f items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.514000 audit: BPF prog-id=194 op=LOAD Dec 16 12:58:28.514000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc49900260 a2=94 a3=2 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.514000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:58:28.514000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc49900260 a2=0 a3=2 
items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.748830 containerd[1627]: time="2025-12-16T12:58:28.748334461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:28.751154 containerd[1627]: time="2025-12-16T12:58:28.751040203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:58:28.751154 containerd[1627]: time="2025-12-16T12:58:28.751146223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:28.751905 kubelet[2968]: E1216 12:58:28.751568 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:58:28.751905 kubelet[2968]: E1216 12:58:28.751831 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:58:28.752412 kubelet[2968]: E1216 12:58:28.752355 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frm2n_calico-system(ae2d98cb-e462-4622-a2ef-d1063c3df86a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:28.752873 containerd[1627]: time="2025-12-16T12:58:28.752841903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:58:28.754475 kubelet[2968]: E1216 12:58:28.754388 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:58:28.758000 audit: BPF prog-id=195 op=LOAD Dec 16 12:58:28.758000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc49900120 a2=94 a3=1 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.758000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.758000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:58:28.758000 audit[4577]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc49900120 a2=94 a3=1 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.758000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.773000 audit: BPF prog-id=196 op=LOAD Dec 16 12:58:28.773000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc49900110 a2=94 a3=4 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.773000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.773000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:58:28.773000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc49900110 a2=0 a3=4 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.773000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.774000 audit: BPF prog-id=197 op=LOAD Dec 16 12:58:28.774000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc498fff70 a2=94 a3=5 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.774000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.774000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:58:28.774000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc498fff70 a2=0 a3=5 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.774000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.774000 audit: BPF prog-id=198 op=LOAD Dec 16 12:58:28.774000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc49900190 a2=94 a3=6 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.774000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.774000 audit: BPF prog-id=198 op=UNLOAD Dec 16 12:58:28.774000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc49900190 a2=0 a3=6 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.774000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.774000 audit: BPF prog-id=199 op=LOAD Dec 16 12:58:28.774000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc498ff940 a2=94 a3=88 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.774000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.775000 audit: BPF prog-id=200 op=LOAD Dec 16 12:58:28.775000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc498ff7c0 a2=94 a3=2 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.775000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.775000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:58:28.775000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc498ff7f0 a2=0 a3=7ffc498ff8f0 items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.775000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.775000 audit: BPF prog-id=199 op=UNLOAD Dec 16 12:58:28.775000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=16bafd10 a2=0 a3=909f5239823dc7db items=0 ppid=4456 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.775000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:28.794000 audit: BPF prog-id=201 op=LOAD Dec 16 12:58:28.794000 audit[4582]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7fa85930 a2=98 a3=1999999999999999 items=0 ppid=4456 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.794000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:58:28.795000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:58:28.795000 audit[4582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd7fa85900 a3=0 items=0 ppid=4456 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.795000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:58:28.796000 audit: BPF prog-id=202 op=LOAD Dec 16 12:58:28.796000 audit[4582]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7fa85810 a2=94 a3=ffff items=0 ppid=4456 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.796000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:58:28.796000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:58:28.796000 audit[4582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd7fa85810 a2=94 a3=ffff items=0 ppid=4456 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.796000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:58:28.796000 audit: BPF prog-id=203 op=LOAD Dec 16 12:58:28.796000 audit[4582]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7fa85850 a2=94 a3=7ffd7fa85a30 items=0 ppid=4456 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.796000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:58:28.797000 audit: BPF prog-id=203 op=UNLOAD Dec 16 12:58:28.797000 audit[4582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd7fa85850 a2=94 a3=7ffd7fa85a30 items=0 ppid=4456 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.797000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:58:28.906295 systemd-networkd[1550]: vxlan.calico: Link UP Dec 16 12:58:28.906311 systemd-networkd[1550]: vxlan.calico: Gained carrier Dec 16 12:58:28.944000 audit: BPF prog-id=204 op=LOAD Dec 16 12:58:28.944000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd5ea48940 a2=98 a3=20 items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.944000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.944000 audit: BPF prog-id=204 op=UNLOAD Dec 16 12:58:28.944000 audit[4607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd5ea48910 a3=0 items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:28.944000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.945000 audit: BPF prog-id=205 op=LOAD Dec 16 12:58:28.945000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd5ea48750 a2=94 a3=54428f items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.945000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:58:28.945000 audit[4607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd5ea48750 a2=94 a3=54428f items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.945000 audit: BPF prog-id=206 op=LOAD Dec 16 12:58:28.945000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd5ea48780 a2=94 a3=2 items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.945000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:58:28.945000 audit[4607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd5ea48780 a2=0 a3=2 items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.945000 audit: BPF prog-id=207 op=LOAD Dec 16 12:58:28.945000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd5ea48530 a2=94 a3=4 items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.945000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:58:28.945000 
audit[4607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd5ea48530 a2=94 a3=4 items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.945000 audit: BPF prog-id=208 op=LOAD Dec 16 12:58:28.945000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd5ea48630 a2=94 a3=7ffd5ea487b0 items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.945000 audit: BPF prog-id=208 op=UNLOAD Dec 16 12:58:28.945000 audit[4607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd5ea48630 a2=0 a3=7ffd5ea487b0 items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.947000 audit: BPF prog-id=209 op=LOAD Dec 16 12:58:28.947000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd5ea47d60 a2=94 a3=2 items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.947000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.947000 audit: BPF prog-id=209 op=UNLOAD Dec 16 12:58:28.947000 audit[4607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd5ea47d60 a2=0 a3=2 items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.947000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.947000 audit: BPF prog-id=210 op=LOAD Dec 16 12:58:28.947000 audit[4607]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd5ea47e60 a2=94 a3=30 items=0 ppid=4456 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:28.947000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:28.955000 audit: BPF prog-id=211 op=LOAD Dec 16 12:58:28.955000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbbd5c4c0 a2=98 a3=0 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:28.955000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:58:28.955000 audit[4611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdbbd5c490 a3=0 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:28.956000 audit: BPF prog-id=212 op=LOAD Dec 16 12:58:28.956000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdbbd5c2b0 a2=94 a3=54428f items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.956000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:28.958000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:58:28.958000 audit[4611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdbbd5c2b0 a2=94 a3=54428f items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:28.958000 audit: BPF prog-id=213 op=LOAD Dec 16 12:58:28.958000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdbbd5c2e0 a2=94 a3=2 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:28.958000 audit: BPF prog-id=213 op=UNLOAD Dec 16 12:58:28.958000 audit[4611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdbbd5c2e0 a2=0 a3=2 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:28.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:28.972523 systemd-networkd[1550]: calid0a706e2d6a: Gained IPv6LL Dec 16 12:58:29.080970 containerd[1627]: time="2025-12-16T12:58:29.080897395Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:29.082351 containerd[1627]: time="2025-12-16T12:58:29.082247734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:58:29.082351 containerd[1627]: time="2025-12-16T12:58:29.082303314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:29.084092 kubelet[2968]: E1216 12:58:29.082677 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:58:29.084092 kubelet[2968]: E1216 12:58:29.082813 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:58:29.084499 kubelet[2968]: E1216 12:58:29.084341 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcxrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bfdd67657-r2jq9_calico-system(fc47de84-c877-49ec-9b26-5c23204d879d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:29.085568 kubelet[2968]: E1216 12:58:29.085521 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bfdd67657-r2jq9" podUID="fc47de84-c877-49ec-9b26-5c23204d879d" Dec 16 12:58:29.231000 audit: BPF prog-id=214 op=LOAD Dec 16 12:58:29.231000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdbbd5c1a0 a2=94 a3=1 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.231000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:29.231000 audit: BPF prog-id=214 
op=UNLOAD Dec 16 12:58:29.231000 audit[4611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdbbd5c1a0 a2=94 a3=1 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.231000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:29.246000 audit: BPF prog-id=215 op=LOAD Dec 16 12:58:29.246000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdbbd5c190 a2=94 a3=4 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.246000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:29.246000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:58:29.246000 audit[4611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdbbd5c190 a2=0 a3=4 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.246000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:29.247000 audit: BPF prog-id=216 op=LOAD Dec 16 12:58:29.247000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdbbd5bff0 a2=94 a3=5 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.247000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:29.247000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:58:29.247000 audit[4611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdbbd5bff0 a2=0 a3=5 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.247000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:29.248000 audit: BPF prog-id=217 op=LOAD Dec 16 12:58:29.248000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdbbd5c210 a2=94 a3=6 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.248000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:29.248000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:58:29.248000 audit[4611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdbbd5c210 a2=0 a3=6 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.248000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:29.249000 audit: BPF prog-id=218 op=LOAD Dec 16 12:58:29.249000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdbbd5b9c0 a2=94 a3=88 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.249000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:29.249000 audit: BPF prog-id=219 op=LOAD Dec 16 12:58:29.249000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffdbbd5b840 a2=94 a3=2 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.249000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:29.249000 audit: BPF prog-id=219 op=UNLOAD Dec 16 12:58:29.249000 audit[4611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffdbbd5b870 a2=0 a3=7ffdbbd5b970 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.249000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:29.250000 audit: BPF prog-id=218 op=UNLOAD Dec 16 12:58:29.250000 audit[4611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=40168d10 a2=0 a3=8de686bd414487c0 items=0 ppid=4456 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.250000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:29.260000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:58:29.260000 audit[4456]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c001002680 a2=0 a3=0 items=0 ppid=4414 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.260000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:58:29.356236 systemd-networkd[1550]: cali2439840b6b3: Gained IPv6LL Dec 16 12:58:29.369000 audit[4643]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4643 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:29.369000 audit[4643]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffffba34c50 a2=0 a3=7ffffba34c3c items=0 ppid=4456 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.369000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:29.375000 audit[4644]: NETFILTER_CFG table=raw:122 family=2 entries=21 op=nft_register_chain pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:29.375000 audit[4644]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc0cbd68c0 a2=0 a3=7ffc0cbd68ac items=0 ppid=4456 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.375000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:29.383000 audit[4646]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=4646 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:29.383000 audit[4646]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffddcc4d070 a2=0 a3=7ffddcc4d05c items=0 ppid=4456 pid=4646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.383000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:29.395000 audit[4648]: NETFILTER_CFG table=filter:124 family=2 entries=122 op=nft_register_chain pid=4648 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:29.395000 audit[4648]: SYSCALL arch=c000003e syscall=46 success=yes exit=69792 a0=3 a1=7ffe112ab770 a2=0 a3=7ffe112ab75c items=0 ppid=4456 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.395000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:29.493073 kubelet[2968]: E1216 12:58:29.492724 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bfdd67657-r2jq9" podUID="fc47de84-c877-49ec-9b26-5c23204d879d" Dec 16 12:58:29.505269 kubelet[2968]: E1216 12:58:29.505134 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:58:29.632000 audit[4662]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:29.632000 audit[4662]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc603590c0 a2=0 a3=7ffc603590ac items=0 ppid=3129 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.632000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:29.639000 audit[4662]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:29.639000 audit[4662]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc603590c0 a2=0 a3=0 items=0 ppid=3129 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:29.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:30.060216 systemd-networkd[1550]: vxlan.calico: Gained IPv6LL Dec 16 12:58:34.730462 containerd[1627]: time="2025-12-16T12:58:34.730390400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb664895f-z7td5,Uid:654be68d-8474-4290-8738-6c95ee33b1c3,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:34.906643 systemd-networkd[1550]: calic7d99fbb6fa: Link UP Dec 16 12:58:34.908730 systemd-networkd[1550]: calic7d99fbb6fa: Gained carrier Dec 16 12:58:34.935165 containerd[1627]: 2025-12-16 12:58:34.799 [INFO][4675] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0 calico-kube-controllers-7fb664895f- calico-system 654be68d-8474-4290-8738-6c95ee33b1c3 879 0 2025-12-16 12:57:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fb664895f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-gtzk5.gb1.brightbox.com calico-kube-controllers-7fb664895f-z7td5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic7d99fbb6fa [] [] }} ContainerID="da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" Namespace="calico-system" Pod="calico-kube-controllers-7fb664895f-z7td5" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-" Dec 16 12:58:34.935165 containerd[1627]: 2025-12-16 12:58:34.799 [INFO][4675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" Namespace="calico-system" Pod="calico-kube-controllers-7fb664895f-z7td5" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0" Dec 16 12:58:34.935165 containerd[1627]: 2025-12-16 12:58:34.843 [INFO][4686] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" HandleID="k8s-pod-network.da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" Workload="srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0" Dec 16 12:58:34.935523 containerd[1627]: 2025-12-16 12:58:34.843 [INFO][4686] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" HandleID="k8s-pod-network.da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" Workload="srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gtzk5.gb1.brightbox.com", "pod":"calico-kube-controllers-7fb664895f-z7td5", "timestamp":"2025-12-16 12:58:34.843505554 +0000 UTC"}, Hostname:"srv-gtzk5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:34.935523 containerd[1627]: 2025-12-16 12:58:34.844 [INFO][4686] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:34.935523 containerd[1627]: 2025-12-16 12:58:34.844 [INFO][4686] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:58:34.935523 containerd[1627]: 2025-12-16 12:58:34.844 [INFO][4686] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gtzk5.gb1.brightbox.com' Dec 16 12:58:34.935523 containerd[1627]: 2025-12-16 12:58:34.856 [INFO][4686] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:34.935523 containerd[1627]: 2025-12-16 12:58:34.863 [INFO][4686] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:34.935523 containerd[1627]: 2025-12-16 12:58:34.872 [INFO][4686] ipam/ipam.go 511: Trying affinity for 192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:34.935523 containerd[1627]: 2025-12-16 12:58:34.875 [INFO][4686] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:34.935523 containerd[1627]: 2025-12-16 12:58:34.879 [INFO][4686] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:34.938655 containerd[1627]: 2025-12-16 12:58:34.879 [INFO][4686] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:34.938655 containerd[1627]: 2025-12-16 12:58:34.882 [INFO][4686] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22 Dec 16 12:58:34.938655 containerd[1627]: 2025-12-16 12:58:34.889 [INFO][4686] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:34.938655 containerd[1627]: 2025-12-16 12:58:34.896 [INFO][4686] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.67/26] block=192.168.21.64/26 handle="k8s-pod-network.da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:34.938655 containerd[1627]: 2025-12-16 12:58:34.896 [INFO][4686] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.67/26] handle="k8s-pod-network.da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:34.938655 containerd[1627]: 2025-12-16 12:58:34.896 [INFO][4686] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:58:34.938655 containerd[1627]: 2025-12-16 12:58:34.897 [INFO][4686] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.67/26] IPv6=[] ContainerID="da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" HandleID="k8s-pod-network.da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" Workload="srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0" Dec 16 12:58:34.939650 containerd[1627]: 2025-12-16 12:58:34.901 [INFO][4675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" Namespace="calico-system" Pod="calico-kube-controllers-7fb664895f-z7td5" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0", GenerateName:"calico-kube-controllers-7fb664895f-", Namespace:"calico-system", SelfLink:"", UID:"654be68d-8474-4290-8738-6c95ee33b1c3", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fb664895f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-7fb664895f-z7td5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7d99fbb6fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:34.939781 containerd[1627]: 2025-12-16 12:58:34.901 [INFO][4675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.67/32] ContainerID="da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" Namespace="calico-system" Pod="calico-kube-controllers-7fb664895f-z7td5" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0" Dec 16 12:58:34.939781 containerd[1627]: 2025-12-16 12:58:34.901 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7d99fbb6fa ContainerID="da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" Namespace="calico-system" Pod="calico-kube-controllers-7fb664895f-z7td5" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0" Dec 16 12:58:34.939781 containerd[1627]: 2025-12-16 12:58:34.910 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" Namespace="calico-system" Pod="calico-kube-controllers-7fb664895f-z7td5" 
WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0" Dec 16 12:58:34.939927 containerd[1627]: 2025-12-16 12:58:34.911 [INFO][4675] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" Namespace="calico-system" Pod="calico-kube-controllers-7fb664895f-z7td5" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0", GenerateName:"calico-kube-controllers-7fb664895f-", Namespace:"calico-system", SelfLink:"", UID:"654be68d-8474-4290-8738-6c95ee33b1c3", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fb664895f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22", Pod:"calico-kube-controllers-7fb664895f-z7td5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7d99fbb6fa", MAC:"e6:b9:52:e9:f8:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:34.940060 containerd[1627]: 2025-12-16 12:58:34.928 [INFO][4675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" Namespace="calico-system" Pod="calico-kube-controllers-7fb664895f-z7td5" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--kube--controllers--7fb664895f--z7td5-eth0" Dec 16 12:58:34.960000 audit[4702]: NETFILTER_CFG table=filter:127 family=2 entries=40 op=nft_register_chain pid=4702 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:34.972036 kernel: kauditd_printk_skb: 253 callbacks suppressed Dec 16 12:58:34.972225 kernel: audit: type=1325 audit(1765889914.960:666): table=filter:127 family=2 entries=40 op=nft_register_chain pid=4702 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:34.960000 audit[4702]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffcfbe47cb0 a2=0 a3=7ffcfbe47c9c items=0 ppid=4456 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:34.980325 kernel: audit: type=1300 audit(1765889914.960:666): arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffcfbe47cb0 a2=0 a3=7ffcfbe47c9c items=0 ppid=4456 pid=4702 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:34.960000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:34.990031 kernel: audit: type=1327 audit(1765889914.960:666): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:35.008359 containerd[1627]: time="2025-12-16T12:58:35.008295359Z" level=info msg="connecting to shim da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22" address="unix:///run/containerd/s/7fc455fbd43b955da44f388239c6051e0bf2edb14fd915d1f60f0a4c981da55d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:35.062240 systemd[1]: Started cri-containerd-da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22.scope - libcontainer container da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22. Dec 16 12:58:35.084000 audit: BPF prog-id=220 op=LOAD Dec 16 12:58:35.086000 audit: BPF prog-id=221 op=LOAD Dec 16 12:58:35.088069 kernel: audit: type=1334 audit(1765889915.084:667): prog-id=220 op=LOAD Dec 16 12:58:35.088142 kernel: audit: type=1334 audit(1765889915.086:668): prog-id=221 op=LOAD Dec 16 12:58:35.086000 audit[4724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4712 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.095026 kernel: audit: type=1300 audit(1765889915.086:668): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4712 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.095145 kernel: audit: type=1327 audit(1765889915.086:668): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461366634346163643630623066393364313163646439383465363237 Dec 16 12:58:35.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461366634346163643630623066393364313163646439383465363237 Dec 16 12:58:35.086000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:58:35.101337 kernel: audit: type=1334 audit(1765889915.086:669): prog-id=221 op=UNLOAD Dec 16 12:58:35.086000 audit[4724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.104142 kernel: audit: type=1300 audit(1765889915.086:669): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461366634346163643630623066393364313163646439383465363237 Dec 16 12:58:35.109491 kernel: audit: type=1327 audit(1765889915.086:669): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461366634346163643630623066393364313163646439383465363237 Dec 16 12:58:35.086000 audit: BPF prog-id=222 op=LOAD Dec 16 12:58:35.086000 audit[4724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4712 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461366634346163643630623066393364313163646439383465363237 Dec 16 12:58:35.086000 audit: BPF prog-id=223 op=LOAD Dec 16 12:58:35.086000 audit[4724]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4712 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461366634346163643630623066393364313163646439383465363237 Dec 16 12:58:35.086000 audit: BPF prog-id=223 op=UNLOAD Dec 16 12:58:35.086000 audit[4724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461366634346163643630623066393364313163646439383465363237 Dec 16 12:58:35.087000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:58:35.087000 audit[4724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461366634346163643630623066393364313163646439383465363237 Dec 16 12:58:35.087000 audit: BPF prog-id=224 op=LOAD Dec 16 12:58:35.087000 audit[4724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 
a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4712 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461366634346163643630623066393364313163646439383465363237 Dec 16 12:58:35.177393 containerd[1627]: time="2025-12-16T12:58:35.177243025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb664895f-z7td5,Uid:654be68d-8474-4290-8738-6c95ee33b1c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"da6f44acd60b0f93d11cdd984e6271d9749b29be0291262c663e8a34f1991b22\"" Dec 16 12:58:35.180622 containerd[1627]: time="2025-12-16T12:58:35.180327464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:58:35.486543 containerd[1627]: time="2025-12-16T12:58:35.486472911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:35.493097 containerd[1627]: time="2025-12-16T12:58:35.492990098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:58:35.493396 containerd[1627]: time="2025-12-16T12:58:35.493005394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:35.493496 kubelet[2968]: E1216 12:58:35.493423 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:58:35.494118 kubelet[2968]: E1216 12:58:35.493512 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:58:35.494118 kubelet[2968]: E1216 12:58:35.493780 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zzkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fb664895f-z7td5_calico-system(654be68d-8474-4290-8738-6c95ee33b1c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:35.494984 kubelet[2968]: E1216 12:58:35.494902 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" podUID="654be68d-8474-4290-8738-6c95ee33b1c3" Dec 16 12:58:35.502332 kubelet[2968]: E1216 12:58:35.502208 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" podUID="654be68d-8474-4290-8738-6c95ee33b1c3" Dec 16 12:58:35.730265 containerd[1627]: time="2025-12-16T12:58:35.730166187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rnz2,Uid:800738c6-a41d-474f-b72d-35aa420d6fcf,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:35.901485 systemd-networkd[1550]: cali7b9c97c3863: Link UP Dec 16 12:58:35.902416 systemd-networkd[1550]: cali7b9c97c3863: Gained carrier Dec 16 12:58:35.932975 containerd[1627]: 2025-12-16 12:58:35.796 [INFO][4751] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0 coredns-674b8bbfcf- kube-system 800738c6-a41d-474f-b72d-35aa420d6fcf 869 0 2025-12-16 12:57:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gtzk5.gb1.brightbox.com coredns-674b8bbfcf-5rnz2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7b9c97c3863 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rnz2" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-" Dec 16 12:58:35.932975 containerd[1627]: 2025-12-16 12:58:35.796 [INFO][4751] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rnz2" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0" Dec 16 12:58:35.932975 containerd[1627]: 2025-12-16 12:58:35.838 [INFO][4762] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" HandleID="k8s-pod-network.f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" Workload="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0" Dec 16 12:58:35.935040 containerd[1627]: 2025-12-16 12:58:35.839 [INFO][4762] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" HandleID="k8s-pod-network.f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" Workload="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef50), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gtzk5.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-5rnz2", "timestamp":"2025-12-16 12:58:35.838702535 +0000 UTC"}, Hostname:"srv-gtzk5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:35.935040 containerd[1627]: 2025-12-16 12:58:35.839 [INFO][4762] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 12:58:35.935040 containerd[1627]: 2025-12-16 12:58:35.839 [INFO][4762] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:58:35.935040 containerd[1627]: 2025-12-16 12:58:35.839 [INFO][4762] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gtzk5.gb1.brightbox.com' Dec 16 12:58:35.935040 containerd[1627]: 2025-12-16 12:58:35.851 [INFO][4762] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:35.935040 containerd[1627]: 2025-12-16 12:58:35.859 [INFO][4762] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:35.935040 containerd[1627]: 2025-12-16 12:58:35.868 [INFO][4762] ipam/ipam.go 511: Trying affinity for 192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:35.935040 containerd[1627]: 2025-12-16 12:58:35.871 [INFO][4762] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:35.935040 containerd[1627]: 2025-12-16 12:58:35.876 [INFO][4762] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:35.935499 containerd[1627]: 2025-12-16 12:58:35.876 [INFO][4762] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:35.935499 containerd[1627]: 2025-12-16 12:58:35.878 [INFO][4762] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c Dec 16 12:58:35.935499 containerd[1627]: 2025-12-16 12:58:35.884 [INFO][4762] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:35.935499 containerd[1627]: 2025-12-16 12:58:35.894 [INFO][4762] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.68/26] block=192.168.21.64/26 handle="k8s-pod-network.f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:35.935499 containerd[1627]: 2025-12-16 12:58:35.894 [INFO][4762] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.68/26] handle="k8s-pod-network.f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:35.935499 containerd[1627]: 2025-12-16 12:58:35.894 [INFO][4762] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:58:35.935499 containerd[1627]: 2025-12-16 12:58:35.895 [INFO][4762] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.68/26] IPv6=[] ContainerID="f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" HandleID="k8s-pod-network.f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" Workload="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0" Dec 16 12:58:35.935833 containerd[1627]: 2025-12-16 12:58:35.897 [INFO][4751] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rnz2" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"800738c6-a41d-474f-b72d-35aa420d6fcf", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-5rnz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b9c97c3863", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:35.935833 containerd[1627]: 2025-12-16 12:58:35.898 [INFO][4751] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.68/32] ContainerID="f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rnz2" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0" Dec 16 12:58:35.935833 containerd[1627]: 2025-12-16 12:58:35.898 [INFO][4751] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b9c97c3863 ContainerID="f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rnz2" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0" Dec 16 12:58:35.935833 containerd[1627]: 2025-12-16 12:58:35.903 [INFO][4751] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-5rnz2" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0" Dec 16 12:58:35.935833 containerd[1627]: 2025-12-16 12:58:35.904 [INFO][4751] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rnz2" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"800738c6-a41d-474f-b72d-35aa420d6fcf", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c", Pod:"coredns-674b8bbfcf-5rnz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b9c97c3863", MAC:"b2:1a:d7:70:fb:17", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:35.935833 containerd[1627]: 2025-12-16 12:58:35.925 [INFO][4751] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-5rnz2" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--5rnz2-eth0" Dec 16 12:58:35.984255 containerd[1627]: time="2025-12-16T12:58:35.983835098Z" level=info msg="connecting to shim f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c" address="unix:///run/containerd/s/98909ef79434f6ccd487a246e83f802b80327d16e522c9bb38cd9890bdcc41ba" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:35.999000 audit[4793]: NETFILTER_CFG table=filter:128 family=2 entries=50 op=nft_register_chain pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:35.999000 audit[4793]: SYSCALL arch=c000003e syscall=46 success=yes exit=24928 a0=3 a1=7ffe0fd257b0 a2=0 a3=7ffe0fd2579c items=0 ppid=4456 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.999000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:36.034545 systemd[1]: Started cri-containerd-f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c.scope - libcontainer container f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c. Dec 16 12:58:36.050000 audit: BPF prog-id=225 op=LOAD Dec 16 12:58:36.051000 audit: BPF prog-id=226 op=LOAD Dec 16 12:58:36.051000 audit[4800]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4788 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653636363466643433623036396435303230343362316661623234 Dec 16 12:58:36.051000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:58:36.051000 audit[4800]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4788 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653636363466643433623036396435303230343362316661623234 Dec 16 12:58:36.051000 audit: BPF prog-id=227 op=LOAD Dec 16 12:58:36.051000 audit[4800]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4788 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653636363466643433623036396435303230343362316661623234 Dec 16 12:58:36.052000 audit: BPF prog-id=228 op=LOAD Dec 16 12:58:36.052000 audit[4800]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4788 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653636363466643433623036396435303230343362316661623234 Dec 16 12:58:36.052000 audit: BPF prog-id=228 op=UNLOAD Dec 16 12:58:36.052000 audit[4800]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4788 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653636363466643433623036396435303230343362316661623234 Dec 16 12:58:36.052000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:58:36.052000 audit[4800]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4788 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653636363466643433623036396435303230343362316661623234 Dec 16 12:58:36.052000 audit: BPF prog-id=229 op=LOAD Dec 16 12:58:36.052000 audit[4800]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4788 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653636363466643433623036396435303230343362316661623234 Dec 16 12:58:36.108747 containerd[1627]: time="2025-12-16T12:58:36.108656316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rnz2,Uid:800738c6-a41d-474f-b72d-35aa420d6fcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c\"" Dec 16 12:58:36.119562 containerd[1627]: time="2025-12-16T12:58:36.119485060Z" level=info msg="CreateContainer within sandbox \"f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:58:36.137493 containerd[1627]: time="2025-12-16T12:58:36.137445558Z" level=info msg="Container fbb2f8c832b6f5630d65d736ce2ab464b78215a43824d5db48ba3c9f34af44ba: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:36.143109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3730787003.mount: Deactivated successfully. 
Dec 16 12:58:36.150272 containerd[1627]: time="2025-12-16T12:58:36.150178361Z" level=info msg="CreateContainer within sandbox \"f0e6664fd43b069d502043b1fab24ff6658947ba5b09de26c226e8ab34566f6c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fbb2f8c832b6f5630d65d736ce2ab464b78215a43824d5db48ba3c9f34af44ba\"" Dec 16 12:58:36.153028 containerd[1627]: time="2025-12-16T12:58:36.151042263Z" level=info msg="StartContainer for \"fbb2f8c832b6f5630d65d736ce2ab464b78215a43824d5db48ba3c9f34af44ba\"" Dec 16 12:58:36.153028 containerd[1627]: time="2025-12-16T12:58:36.152185921Z" level=info msg="connecting to shim fbb2f8c832b6f5630d65d736ce2ab464b78215a43824d5db48ba3c9f34af44ba" address="unix:///run/containerd/s/98909ef79434f6ccd487a246e83f802b80327d16e522c9bb38cd9890bdcc41ba" protocol=ttrpc version=3 Dec 16 12:58:36.186229 systemd[1]: Started cri-containerd-fbb2f8c832b6f5630d65d736ce2ab464b78215a43824d5db48ba3c9f34af44ba.scope - libcontainer container fbb2f8c832b6f5630d65d736ce2ab464b78215a43824d5db48ba3c9f34af44ba. Dec 16 12:58:36.205000 audit: BPF prog-id=230 op=LOAD Dec 16 12:58:36.206000 audit: BPF prog-id=231 op=LOAD Dec 16 12:58:36.206000 audit[4827]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4788 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623266386338333262366635363330643635643733366365326162 Dec 16 12:58:36.207000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:58:36.207000 audit[4827]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4788 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623266386338333262366635363330643635643733366365326162 Dec 16 12:58:36.207000 audit: BPF prog-id=232 op=LOAD Dec 16 12:58:36.207000 audit[4827]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4788 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623266386338333262366635363330643635643733366365326162 Dec 16 12:58:36.207000 audit: BPF prog-id=233 op=LOAD Dec 16 12:58:36.207000 audit[4827]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4788 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.207000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623266386338333262366635363330643635643733366365326162 Dec 16 12:58:36.207000 audit: BPF prog-id=233 op=UNLOAD Dec 16 12:58:36.207000 audit[4827]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4788 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623266386338333262366635363330643635643733366365326162 Dec 16 12:58:36.207000 audit: BPF prog-id=232 op=UNLOAD Dec 16 12:58:36.207000 audit[4827]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4788 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623266386338333262366635363330643635643733366365326162 Dec 16 12:58:36.207000 audit: BPF prog-id=234 op=LOAD Dec 16 12:58:36.207000 audit[4827]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4788 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623266386338333262366635363330643635643733366365326162 Dec 16 12:58:36.236283 containerd[1627]: time="2025-12-16T12:58:36.236089390Z" level=info msg="StartContainer for \"fbb2f8c832b6f5630d65d736ce2ab464b78215a43824d5db48ba3c9f34af44ba\" returns successfully" Dec 16 12:58:36.511931 kubelet[2968]: E1216 12:58:36.511722 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" podUID="654be68d-8474-4290-8738-6c95ee33b1c3" Dec 16 12:58:36.580000 audit[4861]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4861 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:36.580000 audit[4861]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffe9b5fd90 a2=0 a3=7fffe9b5fd7c items=0 ppid=3129 pid=4861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:36.585000 audit[4861]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4861 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:36.585000 audit[4861]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffe9b5fd90 a2=0 a3=0 items=0 ppid=3129 pid=4861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:36.585000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:36.730975 containerd[1627]: time="2025-12-16T12:58:36.730751770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6f4459b6-wgcgz,Uid:612befca-b93d-4468-b0c5-1d17cad065aa,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:58:36.733253 containerd[1627]: time="2025-12-16T12:58:36.733152688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hjfvx,Uid:5f28fd1d-daa3-4b1a-9808-93af3076e192,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:36.909215 systemd-networkd[1550]: calic7d99fbb6fa: Gained IPv6LL Dec 16 12:58:36.990062 systemd-networkd[1550]: calic2184d1c288: Link UP Dec 16 12:58:36.993063 systemd-networkd[1550]: calic2184d1c288: Gained carrier Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.818 [INFO][4863] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0 calico-apiserver-6c6f4459b6- calico-apiserver 612befca-b93d-4468-b0c5-1d17cad065aa 876 0 2025-12-16 12:57:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c6f4459b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gtzk5.gb1.brightbox.com calico-apiserver-6c6f4459b6-wgcgz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic2184d1c288 [] [] }} ContainerID="8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-wgcgz" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-" Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.818 [INFO][4863] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-wgcgz" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0" Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.885 [INFO][4887] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" HandleID="k8s-pod-network.8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" Workload="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0" Dec 16 12:58:37.031345 containerd[1627]: 
2025-12-16 12:58:36.886 [INFO][4887] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" HandleID="k8s-pod-network.8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" Workload="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103b00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gtzk5.gb1.brightbox.com", "pod":"calico-apiserver-6c6f4459b6-wgcgz", "timestamp":"2025-12-16 12:58:36.885835857 +0000 UTC"}, Hostname:"srv-gtzk5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.886 [INFO][4887] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.886 [INFO][4887] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.886 [INFO][4887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gtzk5.gb1.brightbox.com' Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.899 [INFO][4887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.913 [INFO][4887] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.927 [INFO][4887] ipam/ipam.go 511: Trying affinity for 192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.932 [INFO][4887] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.939 [INFO][4887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.939 [INFO][4887] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.947 [INFO][4887] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.960 [INFO][4887] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.976 [INFO][4887] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.69/26] block=192.168.21.64/26 handle="k8s-pod-network.8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.976 [INFO][4887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.69/26] 
handle="k8s-pod-network.8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.976 [INFO][4887] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:58:37.031345 containerd[1627]: 2025-12-16 12:58:36.976 [INFO][4887] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.69/26] IPv6=[] ContainerID="8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" HandleID="k8s-pod-network.8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" Workload="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0" Dec 16 12:58:37.035340 containerd[1627]: 2025-12-16 12:58:36.985 [INFO][4863] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-wgcgz" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0", GenerateName:"calico-apiserver-6c6f4459b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"612befca-b93d-4468-b0c5-1d17cad065aa", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6f4459b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6c6f4459b6-wgcgz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2184d1c288", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:37.035340 containerd[1627]: 2025-12-16 12:58:36.986 [INFO][4863] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.69/32] ContainerID="8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-wgcgz" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0" Dec 16 12:58:37.035340 containerd[1627]: 2025-12-16 12:58:36.986 [INFO][4863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2184d1c288 ContainerID="8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-wgcgz" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0" Dec 16 12:58:37.035340 containerd[1627]: 2025-12-16 12:58:36.992 [INFO][4863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-wgcgz" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0" Dec 16 12:58:37.035340 containerd[1627]: 2025-12-16 12:58:36.992 [INFO][4863] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-wgcgz" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0", GenerateName:"calico-apiserver-6c6f4459b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"612befca-b93d-4468-b0c5-1d17cad065aa", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6f4459b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb", Pod:"calico-apiserver-6c6f4459b6-wgcgz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2184d1c288", MAC:"ca:2e:4a:38:27:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:37.035340 containerd[1627]: 2025-12-16 12:58:37.024 [INFO][4863] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-wgcgz" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--wgcgz-eth0" Dec 16 12:58:37.042148 kubelet[2968]: I1216 12:58:37.039458 2968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5rnz2" podStartSLOduration=61.023651826 podStartE2EDuration="1m1.023651826s" podCreationTimestamp="2025-12-16 12:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:58:36.554598109 +0000 UTC m=+67.137172151" watchObservedRunningTime="2025-12-16 12:58:37.023651826 +0000 UTC m=+67.606225837" Dec 16 12:58:37.124325 systemd-networkd[1550]: calic924d4d61ba: Link UP Dec 16 12:58:37.125665 containerd[1627]: time="2025-12-16T12:58:37.125431929Z" level=info msg="connecting to shim 8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb" address="unix:///run/containerd/s/3ad539907c31dc98d5bf164dbe9134598105d941cf2d97e3f729877c415d6262" 
namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:37.121000 audit[4911]: NETFILTER_CFG table=filter:131 family=2 entries=68 op=nft_register_chain pid=4911 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:37.121000 audit[4911]: SYSCALL arch=c000003e syscall=46 success=yes exit=34624 a0=3 a1=7ffd11befd10 a2=0 a3=7ffd11befcfc items=0 ppid=4456 pid=4911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.121000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:37.128227 systemd-networkd[1550]: calic924d4d61ba: Gained carrier Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:36.836 [INFO][4867] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0 goldmane-666569f655- calico-system 5f28fd1d-daa3-4b1a-9808-93af3076e192 880 0 2025-12-16 12:57:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-gtzk5.gb1.brightbox.com goldmane-666569f655-hjfvx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic924d4d61ba [] [] }} ContainerID="151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" Namespace="calico-system" Pod="goldmane-666569f655-hjfvx" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-" Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:36.836 [INFO][4867] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" Namespace="calico-system" Pod="goldmane-666569f655-hjfvx" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0" Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:36.899 [INFO][4892] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" HandleID="k8s-pod-network.151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" Workload="srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0" Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:36.899 [INFO][4892] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" HandleID="k8s-pod-network.151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" Workload="srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf930), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gtzk5.gb1.brightbox.com", "pod":"goldmane-666569f655-hjfvx", "timestamp":"2025-12-16 12:58:36.899324691 +0000 UTC"}, Hostname:"srv-gtzk5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:36.899 [INFO][4892] ipam/ipam_plugin.go 377: About to acquire host-wide 
IPAM lock. Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:36.976 [INFO][4892] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:36.976 [INFO][4892] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gtzk5.gb1.brightbox.com' Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:37.000 [INFO][4892] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:37.016 [INFO][4892] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:37.033 [INFO][4892] ipam/ipam.go 511: Trying affinity for 192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:37.040 [INFO][4892] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:37.046 [INFO][4892] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:37.047 [INFO][4892] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:37.053 [INFO][4892] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8 Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:37.059 [INFO][4892] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:37.087 [INFO][4892] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.70/26] block=192.168.21.64/26 handle="k8s-pod-network.151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:37.087 [INFO][4892] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.70/26] handle="k8s-pod-network.151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:37.087 [INFO][4892] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:58:37.165931 containerd[1627]: 2025-12-16 12:58:37.088 [INFO][4892] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.70/26] IPv6=[] ContainerID="151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" HandleID="k8s-pod-network.151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" Workload="srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0" Dec 16 12:58:37.171233 containerd[1627]: 2025-12-16 12:58:37.095 [INFO][4867] cni-plugin/k8s.go 418: Populated endpoint ContainerID="151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" Namespace="calico-system" Pod="goldmane-666569f655-hjfvx" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5f28fd1d-daa3-4b1a-9808-93af3076e192", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-hjfvx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic924d4d61ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:37.171233 containerd[1627]: 2025-12-16 12:58:37.095 [INFO][4867] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.70/32] ContainerID="151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" Namespace="calico-system" Pod="goldmane-666569f655-hjfvx" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0" Dec 16 12:58:37.171233 containerd[1627]: 2025-12-16 12:58:37.095 [INFO][4867] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic924d4d61ba ContainerID="151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" Namespace="calico-system" Pod="goldmane-666569f655-hjfvx" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0" Dec 16 12:58:37.171233 containerd[1627]: 2025-12-16 12:58:37.130 [INFO][4867] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" Namespace="calico-system" Pod="goldmane-666569f655-hjfvx" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0" Dec 16 12:58:37.171233 containerd[1627]: 2025-12-16 12:58:37.131 [INFO][4867] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" 
Namespace="calico-system" Pod="goldmane-666569f655-hjfvx" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5f28fd1d-daa3-4b1a-9808-93af3076e192", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8", Pod:"goldmane-666569f655-hjfvx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic924d4d61ba", MAC:"3e:95:eb:a3:06:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:37.171233 containerd[1627]: 2025-12-16 12:58:37.154 [INFO][4867] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" Namespace="calico-system" Pod="goldmane-666569f655-hjfvx" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-goldmane--666569f655--hjfvx-eth0" Dec 16 12:58:37.191000 audit[4942]: NETFILTER_CFG table=filter:132 family=2 entries=48 op=nft_register_chain pid=4942 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:37.191000 audit[4942]: SYSCALL arch=c000003e syscall=46 success=yes exit=26388 a0=3 a1=7fffdd07c470 a2=0 a3=7fffdd07c45c items=0 ppid=4456 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.191000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:37.239319 containerd[1627]: time="2025-12-16T12:58:37.239076763Z" level=info msg="connecting to shim 151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8" address="unix:///run/containerd/s/9e40442d5f00914621238da81d6ae198da27d4c2527e39bcab682bbb20b935e7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:37.244461 systemd[1]: Started cri-containerd-8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb.scope - libcontainer container 8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb. 
Dec 16 12:58:37.279000 audit: BPF prog-id=235 op=LOAD Dec 16 12:58:37.281000 audit: BPF prog-id=236 op=LOAD Dec 16 12:58:37.281000 audit[4932]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4921 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862326432343761626437323030656563643663653334633466353534 Dec 16 12:58:37.281000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:58:37.281000 audit[4932]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4921 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862326432343761626437323030656563643663653334633466353534 Dec 16 12:58:37.282000 audit: BPF prog-id=237 op=LOAD Dec 16 12:58:37.282000 audit[4932]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4921 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862326432343761626437323030656563643663653334633466353534 Dec 16 12:58:37.284000 audit: BPF prog-id=238 op=LOAD Dec 16 12:58:37.284000 audit[4932]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4921 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862326432343761626437323030656563643663653334633466353534 Dec 16 12:58:37.284000 audit: BPF prog-id=238 op=UNLOAD Dec 16 12:58:37.284000 audit[4932]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4921 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862326432343761626437323030656563643663653334633466353534 Dec 16 12:58:37.284000 audit: BPF prog-id=237 op=UNLOAD Dec 16 12:58:37.284000 audit[4932]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4921 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862326432343761626437323030656563643663653334633466353534 Dec 16 12:58:37.285000 audit: BPF prog-id=239 op=LOAD Dec 16 12:58:37.285000 audit[4932]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4921 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862326432343761626437323030656563643663653334633466353534 Dec 16 12:58:37.293433 systemd[1]: Started cri-containerd-151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8.scope - libcontainer container 151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8. Dec 16 12:58:37.322000 audit: BPF prog-id=240 op=LOAD Dec 16 12:58:37.323000 audit: BPF prog-id=241 op=LOAD Dec 16 12:58:37.323000 audit[4974]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4963 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316334316234336365653861653735393933633233623962623665 Dec 16 12:58:37.323000 audit: BPF prog-id=241 op=UNLOAD Dec 16 12:58:37.323000 audit[4974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4963 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316334316234336365653861653735393933633233623962623665 Dec 16 12:58:37.323000 audit: BPF prog-id=242 op=LOAD Dec 16 12:58:37.323000 audit[4974]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4963 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.323000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316334316234336365653861653735393933633233623962623665 Dec 16 12:58:37.323000 audit: BPF prog-id=243 op=LOAD Dec 16 12:58:37.323000 audit[4974]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4963 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316334316234336365653861653735393933633233623962623665 Dec 16 12:58:37.323000 audit: BPF prog-id=243 op=UNLOAD Dec 16 12:58:37.323000 audit[4974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4963 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316334316234336365653861653735393933633233623962623665 Dec 16 12:58:37.323000 audit: BPF prog-id=242 op=UNLOAD Dec 16 12:58:37.323000 audit[4974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4963 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316334316234336365653861653735393933633233623962623665 Dec 16 12:58:37.323000 audit: BPF prog-id=244 op=LOAD Dec 16 12:58:37.323000 audit[4974]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4963 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135316334316234336365653861653735393933633233623962623665 Dec 16 12:58:37.362407 containerd[1627]: time="2025-12-16T12:58:37.362357969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6f4459b6-wgcgz,Uid:612befca-b93d-4468-b0c5-1d17cad065aa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8b2d247abd7200eecd6ce34c4f554415490bcd476f2cf963ceb210e387d837cb\"" Dec 16 12:58:37.367133 containerd[1627]: time="2025-12-16T12:58:37.367044658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:58:37.423016 containerd[1627]: time="2025-12-16T12:58:37.421229259Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hjfvx,Uid:5f28fd1d-daa3-4b1a-9808-93af3076e192,Namespace:calico-system,Attempt:0,} returns sandbox id \"151c41b43cee8ae75993c23b9bb6e40c6e6c9a60390c2ced61964f554baf40e8\"" Dec 16 12:58:37.565000 audit[5018]: NETFILTER_CFG table=filter:133 family=2 entries=17 op=nft_register_rule pid=5018 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:37.565000 audit[5018]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffceb14b9e0 a2=0 a3=7ffceb14b9cc items=0 ppid=3129 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:37.569000 audit[5018]: NETFILTER_CFG table=nat:134 family=2 entries=35 op=nft_register_chain pid=5018 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:37.569000 audit[5018]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffceb14b9e0 a2=0 a3=7ffceb14b9cc items=0 ppid=3129 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:37.569000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:37.675932 containerd[1627]: time="2025-12-16T12:58:37.675752108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:37.677360 containerd[1627]: time="2025-12-16T12:58:37.677295578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:58:37.677452 containerd[1627]: time="2025-12-16T12:58:37.677417488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:37.677757 kubelet[2968]: E1216 12:58:37.677686 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:37.679095 kubelet[2968]: E1216 12:58:37.677772 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:37.679293 kubelet[2968]: E1216 12:58:37.678927 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqwlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6f4459b6-wgcgz_calico-apiserver(612befca-b93d-4468-b0c5-1d17cad065aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:37.679646 containerd[1627]: time="2025-12-16T12:58:37.679533064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:58:37.680441 kubelet[2968]: E1216 12:58:37.680389 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" podUID="612befca-b93d-4468-b0c5-1d17cad065aa" Dec 16 12:58:37.868305 systemd-networkd[1550]: cali7b9c97c3863: Gained IPv6LL Dec 16 12:58:38.009387 containerd[1627]: time="2025-12-16T12:58:38.009159718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:38.013872 containerd[1627]: time="2025-12-16T12:58:38.013806568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:58:38.013965 containerd[1627]: time="2025-12-16T12:58:38.013915310Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:38.014166 kubelet[2968]: E1216 12:58:38.014105 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:58:38.014241 kubelet[2968]: E1216 12:58:38.014175 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:58:38.014443 kubelet[2968]: E1216 12:58:38.014360 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d696,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
goldmane-666569f655-hjfvx_calico-system(5f28fd1d-daa3-4b1a-9808-93af3076e192): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:38.016096 kubelet[2968]: E1216 12:58:38.016046 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hjfvx" podUID="5f28fd1d-daa3-4b1a-9808-93af3076e192" Dec 16 12:58:38.252244 systemd-networkd[1550]: calic2184d1c288: Gained IPv6LL Dec 16 12:58:38.538069 kubelet[2968]: E1216 12:58:38.536991 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" podUID="612befca-b93d-4468-b0c5-1d17cad065aa" Dec 16 12:58:38.540271 kubelet[2968]: E1216 12:58:38.539441 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hjfvx" podUID="5f28fd1d-daa3-4b1a-9808-93af3076e192" Dec 16 12:58:38.626000 audit[5022]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:38.626000 audit[5022]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe81b27390 a2=0 a3=7ffe81b2737c items=0 ppid=3129 pid=5022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:38.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:38.636207 systemd-networkd[1550]: calic924d4d61ba: Gained IPv6LL Dec 16 12:58:38.639000 audit[5022]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:38.639000 audit[5022]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe81b27390 a2=0 a3=7ffe81b2737c items=0 ppid=3129 pid=5022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:38.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:38.653000 audit[5024]: NETFILTER_CFG table=filter:137 family=2 entries=14 
op=nft_register_rule pid=5024 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:38.653000 audit[5024]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffff6253480 a2=0 a3=7ffff625346c items=0 ppid=3129 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:38.653000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:38.661000 audit[5024]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5024 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:38.661000 audit[5024]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffff6253480 a2=0 a3=7ffff625346c items=0 ppid=3129 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:38.661000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:38.738626 containerd[1627]: time="2025-12-16T12:58:38.738551446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l84nt,Uid:d0482416-431b-468e-8a2a-835e52be2ad8,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:38.895788 systemd-networkd[1550]: cali3c0322be215: Link UP Dec 16 12:58:38.896124 systemd-networkd[1550]: cali3c0322be215: Gained carrier Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.795 [INFO][5026] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0 coredns-674b8bbfcf- kube-system d0482416-431b-468e-8a2a-835e52be2ad8 878 0 2025-12-16 12:57:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gtzk5.gb1.brightbox.com coredns-674b8bbfcf-l84nt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c0322be215 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84nt" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-" Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.795 [INFO][5026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84nt" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0" Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.833 [INFO][5037] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" HandleID="k8s-pod-network.23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" Workload="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0" Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.833 [INFO][5037] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" HandleID="k8s-pod-network.23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" Workload="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gtzk5.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-l84nt", "timestamp":"2025-12-16 12:58:38.833483899 +0000 UTC"}, Hostname:"srv-gtzk5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.833 [INFO][5037] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.833 [INFO][5037] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.833 [INFO][5037] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gtzk5.gb1.brightbox.com' Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.845 [INFO][5037] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.853 [INFO][5037] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.860 [INFO][5037] ipam/ipam.go 511: Trying affinity for 192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.863 [INFO][5037] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.868 [INFO][5037] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.868 [INFO][5037] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.871 [INFO][5037] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745 Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.878 [INFO][5037] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.888 [INFO][5037] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.71/26] block=192.168.21.64/26 handle="k8s-pod-network.23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.888 [INFO][5037] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.71/26] handle="k8s-pod-network.23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 
12:58:38.888 [INFO][5037] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:58:38.922990 containerd[1627]: 2025-12-16 12:58:38.888 [INFO][5037] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.71/26] IPv6=[] ContainerID="23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" HandleID="k8s-pod-network.23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" Workload="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0" Dec 16 12:58:38.924217 containerd[1627]: 2025-12-16 12:58:38.891 [INFO][5026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84nt" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d0482416-431b-468e-8a2a-835e52be2ad8", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-l84nt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c0322be215", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:38.924217 containerd[1627]: 2025-12-16 12:58:38.891 [INFO][5026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.71/32] ContainerID="23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84nt" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0" Dec 16 12:58:38.924217 containerd[1627]: 2025-12-16 12:58:38.892 [INFO][5026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c0322be215 ContainerID="23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84nt" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0" Dec 16 12:58:38.924217 containerd[1627]: 2025-12-16 12:58:38.897 [INFO][5026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84nt" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0" Dec 16 12:58:38.924217 containerd[1627]: 2025-12-16 12:58:38.897 [INFO][5026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84nt" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d0482416-431b-468e-8a2a-835e52be2ad8", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745", Pod:"coredns-674b8bbfcf-l84nt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c0322be215", MAC:"2e:3f:4e:90:d4:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:38.924217 containerd[1627]: 2025-12-16 12:58:38.916 [INFO][5026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84nt" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-coredns--674b8bbfcf--l84nt-eth0" Dec 16 12:58:38.969924 containerd[1627]: time="2025-12-16T12:58:38.969756560Z" level=info msg="connecting to shim 23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745" address="unix:///run/containerd/s/ff02acc38b99451cf69316187dce7b3424cdf69cee15c22056df5157f61c62d5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:38.968000 audit[5058]: NETFILTER_CFG table=filter:139 family=2 entries=44 op=nft_register_chain pid=5058 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:38.968000 audit[5058]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7ffff8a75270 a2=0 a3=7ffff8a7525c items=0 ppid=4456 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:38.968000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:39.031917 systemd[1]: Started cri-containerd-23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745.scope - libcontainer container 23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745. Dec 16 12:58:39.059000 audit: BPF prog-id=245 op=LOAD Dec 16 12:58:39.060000 audit: BPF prog-id=246 op=LOAD Dec 16 12:58:39.060000 audit[5075]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5064 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233663036646133313034613663623232663030653230323731633438 Dec 16 12:58:39.060000 audit: BPF prog-id=246 op=UNLOAD Dec 16 12:58:39.060000 audit[5075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5064 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233663036646133313034613663623232663030653230323731633438 Dec 16 12:58:39.060000 audit: BPF prog-id=247 op=LOAD Dec 16 12:58:39.060000 audit[5075]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5064 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233663036646133313034613663623232663030653230323731633438 Dec 16 12:58:39.061000 audit: BPF prog-id=248 op=LOAD Dec 16 12:58:39.061000 audit[5075]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5064 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233663036646133313034613663623232663030653230323731633438 Dec 16 12:58:39.061000 audit: BPF prog-id=248 op=UNLOAD Dec 16 12:58:39.061000 audit[5075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5064 pid=5075 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233663036646133313034613663623232663030653230323731633438 Dec 16 12:58:39.061000 audit: BPF prog-id=247 op=UNLOAD Dec 16 12:58:39.061000 audit[5075]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5064 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233663036646133313034613663623232663030653230323731633438 Dec 16 12:58:39.061000 audit: BPF prog-id=249 op=LOAD Dec 16 12:58:39.061000 audit[5075]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5064 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233663036646133313034613663623232663030653230323731633438 Dec 16 12:58:39.115587 containerd[1627]: time="2025-12-16T12:58:39.115528957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l84nt,Uid:d0482416-431b-468e-8a2a-835e52be2ad8,Namespace:kube-system,Attempt:0,} returns sandbox id \"23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745\"" Dec 16 12:58:39.123679 containerd[1627]: time="2025-12-16T12:58:39.123324255Z" level=info msg="CreateContainer within sandbox \"23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:58:39.134834 containerd[1627]: time="2025-12-16T12:58:39.134794647Z" level=info msg="Container 7506ea3892098fc87861b332ab141fc31487ce04b937a0a711cf7a779e83a4d9: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:39.142594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355966211.mount: Deactivated successfully. 
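The audit records above log each runc invocation with a hex-encoded proctitle field: the kernel stores the process argv NUL-separated and hex-encodes it. As a minimal sketch (not part of the log itself), the Python snippet below decodes one of these fields back into the original command line; the hex string is a truncated copy of the proctitle values seen above, and the same approach recovers the iptables entries as "iptables-restore -w 5 -W 100000 --noflush --counters".

    # Decode a hex-encoded audit proctitle: argv elements are NUL-separated.
    hex_proctitle = (
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    )  # truncated prefix of the runc proctitle values logged above
    argv = [a.decode() for a in bytes.fromhex(hex_proctitle).split(b"\x00")]
    print(argv)  # ['runc', '--root', '/run/containerd/runc/k8s.io']
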
Dec 16 12:58:39.151741 containerd[1627]: time="2025-12-16T12:58:39.151591565Z" level=info msg="CreateContainer within sandbox \"23f06da3104a6cb22f00e20271c4878d97797a2946f4463b5f66b28301bbb745\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7506ea3892098fc87861b332ab141fc31487ce04b937a0a711cf7a779e83a4d9\"" Dec 16 12:58:39.154458 containerd[1627]: time="2025-12-16T12:58:39.154037012Z" level=info msg="StartContainer for \"7506ea3892098fc87861b332ab141fc31487ce04b937a0a711cf7a779e83a4d9\"" Dec 16 12:58:39.155842 containerd[1627]: time="2025-12-16T12:58:39.155807755Z" level=info msg="connecting to shim 7506ea3892098fc87861b332ab141fc31487ce04b937a0a711cf7a779e83a4d9" address="unix:///run/containerd/s/ff02acc38b99451cf69316187dce7b3424cdf69cee15c22056df5157f61c62d5" protocol=ttrpc version=3 Dec 16 12:58:39.180211 systemd[1]: Started cri-containerd-7506ea3892098fc87861b332ab141fc31487ce04b937a0a711cf7a779e83a4d9.scope - libcontainer container 7506ea3892098fc87861b332ab141fc31487ce04b937a0a711cf7a779e83a4d9. Dec 16 12:58:39.199000 audit: BPF prog-id=250 op=LOAD Dec 16 12:58:39.200000 audit: BPF prog-id=251 op=LOAD Dec 16 12:58:39.200000 audit[5103]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5064 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735303665613338393230393866633837383631623333326162313431 Dec 16 12:58:39.200000 audit: BPF prog-id=251 op=UNLOAD Dec 16 12:58:39.200000 audit[5103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5064 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735303665613338393230393866633837383631623333326162313431 Dec 16 12:58:39.201000 audit: BPF prog-id=252 op=LOAD Dec 16 12:58:39.201000 audit[5103]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5064 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735303665613338393230393866633837383631623333326162313431 Dec 16 12:58:39.201000 audit: BPF prog-id=253 op=LOAD Dec 16 12:58:39.201000 audit[5103]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5064 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.201000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735303665613338393230393866633837383631623333326162313431 Dec 16 12:58:39.201000 audit: BPF prog-id=253 op=UNLOAD Dec 16 12:58:39.201000 audit[5103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5064 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735303665613338393230393866633837383631623333326162313431 Dec 16 12:58:39.201000 audit: BPF prog-id=252 op=UNLOAD Dec 16 12:58:39.201000 audit[5103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5064 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735303665613338393230393866633837383631623333326162313431 Dec 16 12:58:39.201000 audit: BPF prog-id=254 op=LOAD Dec 16 12:58:39.201000 audit[5103]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5064 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735303665613338393230393866633837383631623333326162313431 Dec 16 12:58:39.245133 containerd[1627]: time="2025-12-16T12:58:39.245081628Z" level=info msg="StartContainer for \"7506ea3892098fc87861b332ab141fc31487ce04b937a0a711cf7a779e83a4d9\" returns successfully" Dec 16 12:58:39.559373 kubelet[2968]: I1216 12:58:39.559089 2968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l84nt" podStartSLOduration=63.559061161 podStartE2EDuration="1m3.559061161s" podCreationTimestamp="2025-12-16 12:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:58:39.556333749 +0000 UTC m=+70.138907770" watchObservedRunningTime="2025-12-16 12:58:39.559061161 +0000 UTC m=+70.141635177" Dec 16 12:58:39.626000 audit[5141]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:39.626000 audit[5141]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffedf718f0 a2=0 a3=7fffedf718dc items=0 ppid=3129 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:39.646000 audit[5141]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=5141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:39.646000 audit[5141]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffedf718f0 a2=0 a3=7fffedf718dc items=0 ppid=3129 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:39.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:39.731584 containerd[1627]: time="2025-12-16T12:58:39.731428441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6f4459b6-4w99m,Uid:f26c7895-de48-48b9-98b3-5ed0a263683c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:58:39.975644 systemd-networkd[1550]: cali856c756fa32: Link UP Dec 16 12:58:39.976679 systemd-networkd[1550]: cali856c756fa32: Gained carrier Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.820 [INFO][5144] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0 calico-apiserver-6c6f4459b6- calico-apiserver f26c7895-de48-48b9-98b3-5ed0a263683c 873 0 2025-12-16 12:57:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c6f4459b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gtzk5.gb1.brightbox.com calico-apiserver-6c6f4459b6-4w99m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali856c756fa32 [] [] }} ContainerID="df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-4w99m" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-" Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.820 [INFO][5144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-4w99m" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0" Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.870 [INFO][5155] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" HandleID="k8s-pod-network.df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" Workload="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0" Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.870 [INFO][5155] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" HandleID="k8s-pod-network.df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" Workload="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gtzk5.gb1.brightbox.com", "pod":"calico-apiserver-6c6f4459b6-4w99m", "timestamp":"2025-12-16 12:58:39.869991972 +0000 UTC"}, Hostname:"srv-gtzk5.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.870 [INFO][5155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.870 [INFO][5155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.870 [INFO][5155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gtzk5.gb1.brightbox.com' Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.895 [INFO][5155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.916 [INFO][5155] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.924 [INFO][5155] ipam/ipam.go 511: Trying affinity for 192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.930 [INFO][5155] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.936 [INFO][5155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.64/26 host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.936 [INFO][5155] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.64/26 handle="k8s-pod-network.df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.941 [INFO][5155] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.952 [INFO][5155] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.64/26 handle="k8s-pod-network.df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.964 [INFO][5155] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.72/26] block=192.168.21.64/26 handle="k8s-pod-network.df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.965 [INFO][5155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.72/26] handle="k8s-pod-network.df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" host="srv-gtzk5.gb1.brightbox.com" Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.965 [INFO][5155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
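The IPAM trace above shows Calico claiming single addresses out of the node's affine block 192.168.21.64/26 (192.168.21.71 for coredns-674b8bbfcf-l84nt earlier, 192.168.21.72 for calico-apiserver-6c6f4459b6-4w99m here). A minimal sketch of the block arithmetic, using only values taken from the log:

    import ipaddress

    # The node's affine Calico IPAM block and the two pod addresses assigned from it above.
    block = ipaddress.ip_network("192.168.21.64/26")
    assigned = [ipaddress.ip_address("192.168.21.71"),
                ipaddress.ip_address("192.168.21.72")]

    print(block.num_addresses)                  # 64 addresses per /26 block
    print(all(ip in block for ip in assigned))  # True: both pod IPs fall inside the block
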
Dec 16 12:58:40.015550 containerd[1627]: 2025-12-16 12:58:39.965 [INFO][5155] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.72/26] IPv6=[] ContainerID="df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" HandleID="k8s-pod-network.df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" Workload="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0" Dec 16 12:58:40.019821 containerd[1627]: 2025-12-16 12:58:39.970 [INFO][5144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-4w99m" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0", GenerateName:"calico-apiserver-6c6f4459b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f26c7895-de48-48b9-98b3-5ed0a263683c", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6f4459b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6c6f4459b6-4w99m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali856c756fa32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:40.019821 containerd[1627]: 2025-12-16 12:58:39.970 [INFO][5144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.72/32] ContainerID="df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-4w99m" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0" Dec 16 12:58:40.019821 containerd[1627]: 2025-12-16 12:58:39.970 [INFO][5144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali856c756fa32 ContainerID="df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-4w99m" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0" Dec 16 12:58:40.019821 containerd[1627]: 2025-12-16 12:58:39.977 [INFO][5144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-4w99m" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0" Dec 16 12:58:40.019821 containerd[1627]: 2025-12-16 12:58:39.978 
[INFO][5144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-4w99m" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0", GenerateName:"calico-apiserver-6c6f4459b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f26c7895-de48-48b9-98b3-5ed0a263683c", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6f4459b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gtzk5.gb1.brightbox.com", ContainerID:"df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c", Pod:"calico-apiserver-6c6f4459b6-4w99m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali856c756fa32", MAC:"46:62:0a:e6:b6:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:40.019821 containerd[1627]: 2025-12-16 12:58:40.005 [INFO][5144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6f4459b6-4w99m" WorkloadEndpoint="srv--gtzk5.gb1.brightbox.com-k8s-calico--apiserver--6c6f4459b6--4w99m-eth0" Dec 16 12:58:40.057693 kernel: kauditd_printk_skb: 189 callbacks suppressed Dec 16 12:58:40.057911 kernel: audit: type=1325 audit(1765889920.049:737): table=filter:142 family=2 entries=53 op=nft_register_chain pid=5170 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:40.049000 audit[5170]: NETFILTER_CFG table=filter:142 family=2 entries=53 op=nft_register_chain pid=5170 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:40.049000 audit[5170]: SYSCALL arch=c000003e syscall=46 success=yes exit=26624 a0=3 a1=7ffe3f6fbaa0 a2=0 a3=7ffe3f6fba8c items=0 ppid=4456 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:40.074015 kernel: audit: type=1300 audit(1765889920.049:737): arch=c000003e syscall=46 success=yes exit=26624 a0=3 a1=7ffe3f6fbaa0 a2=0 a3=7ffe3f6fba8c items=0 ppid=4456 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:40.049000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:40.085068 kernel: audit: type=1327 audit(1765889920.049:737): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:40.089040 containerd[1627]: time="2025-12-16T12:58:40.088942038Z" level=info msg="connecting to shim df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c" address="unix:///run/containerd/s/f1eaf985c1bdef0f7951110a34990a64d0b11458af98ef8bab5761c68ba96b3e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:40.146380 systemd[1]: Started cri-containerd-df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c.scope - libcontainer container df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c. Dec 16 12:58:40.165000 audit: BPF prog-id=255 op=LOAD Dec 16 12:58:40.168982 kernel: audit: type=1334 audit(1765889920.165:738): prog-id=255 op=LOAD Dec 16 12:58:40.167000 audit: BPF prog-id=256 op=LOAD Dec 16 12:58:40.170993 kernel: audit: type=1334 audit(1765889920.167:739): prog-id=256 op=LOAD Dec 16 12:58:40.171088 kernel: audit: type=1300 audit(1765889920.167:739): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5179 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:40.167000 audit[5190]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5179 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:40.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466386432373963383033393661336137386261356536626436363639 Dec 16 12:58:40.178202 kernel: audit: type=1327 audit(1765889920.167:739): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466386432373963383033393661336137386261356536626436363639 Dec 16 12:58:40.168000 audit: BPF prog-id=256 op=UNLOAD Dec 16 12:58:40.182321 kernel: audit: type=1334 audit(1765889920.168:740): prog-id=256 op=UNLOAD Dec 16 12:58:40.168000 audit[5190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5179 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:40.185074 kernel: audit: type=1300 audit(1765889920.168:740): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5179 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:40.168000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466386432373963383033393661336137386261356536626436363639 Dec 16 12:58:40.190398 kernel: audit: type=1327 audit(1765889920.168:740): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466386432373963383033393661336137386261356536626436363639 Dec 16 12:58:40.168000 audit: BPF prog-id=257 op=LOAD Dec 16 12:58:40.168000 audit[5190]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5179 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:40.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466386432373963383033393661336137386261356536626436363639 Dec 16 12:58:40.168000 audit: BPF prog-id=258 op=LOAD Dec 16 12:58:40.168000 audit[5190]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5179 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:40.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466386432373963383033393661336137386261356536626436363639 Dec 16 12:58:40.168000 audit: BPF prog-id=258 op=UNLOAD Dec 16 12:58:40.168000 audit[5190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5179 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:40.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466386432373963383033393661336137386261356536626436363639 Dec 16 12:58:40.168000 audit: BPF prog-id=257 op=UNLOAD Dec 16 12:58:40.168000 audit[5190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5179 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:40.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466386432373963383033393661336137386261356536626436363639 Dec 16 12:58:40.168000 audit: BPF prog-id=259 op=LOAD Dec 16 12:58:40.168000 audit[5190]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5179 pid=5190 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:40.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466386432373963383033393661336137386261356536626436363639 Dec 16 12:58:40.242947 containerd[1627]: time="2025-12-16T12:58:40.242803290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6f4459b6-4w99m,Uid:f26c7895-de48-48b9-98b3-5ed0a263683c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"df8d279c80396a3a78ba5e6bd66693c70c75b1c006f9ba8b23c9ed44c8d7df0c\"" Dec 16 12:58:40.247451 containerd[1627]: time="2025-12-16T12:58:40.247342143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:58:40.576854 containerd[1627]: time="2025-12-16T12:58:40.575422536Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:40.576854 containerd[1627]: time="2025-12-16T12:58:40.576682503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:58:40.576854 containerd[1627]: time="2025-12-16T12:58:40.576787067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:40.577188 kubelet[2968]: E1216 12:58:40.576929 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:40.577188 kubelet[2968]: E1216 12:58:40.577025 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:40.577671 kubelet[2968]: E1216 12:58:40.577204 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6f4459b6-4w99m_calico-apiserver(f26c7895-de48-48b9-98b3-5ed0a263683c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:40.581095 kubelet[2968]: E1216 12:58:40.578413 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" podUID="f26c7895-de48-48b9-98b3-5ed0a263683c" Dec 16 12:58:40.684657 systemd-networkd[1550]: cali3c0322be215: Gained IPv6LL Dec 16 12:58:41.132224 systemd-networkd[1550]: cali856c756fa32: Gained IPv6LL Dec 16 12:58:41.546223 kubelet[2968]: E1216 12:58:41.545720 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" podUID="f26c7895-de48-48b9-98b3-5ed0a263683c" Dec 16 12:58:41.592000 audit[5218]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5218 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:41.592000 audit[5218]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc3ae52570 a2=0 a3=7ffc3ae5255c items=0 ppid=3129 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:41.592000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:41.599000 audit[5218]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:41.599000 audit[5218]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc3ae52570 a2=0 a3=7ffc3ae5255c items=0 ppid=3129 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:41.599000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:41.736978 containerd[1627]: time="2025-12-16T12:58:41.735502208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:58:42.050552 containerd[1627]: time="2025-12-16T12:58:42.050220661Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:42.052134 containerd[1627]: time="2025-12-16T12:58:42.052024823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:58:42.052466 containerd[1627]: time="2025-12-16T12:58:42.052081844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:42.053163 kubelet[2968]: E1216 12:58:42.053088 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:58:42.054535 kubelet[2968]: E1216 12:58:42.053761 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:58:42.055265 kubelet[2968]: E1216 12:58:42.055195 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3b832fcc79f24e90b591efebd74820fd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lcxrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bfdd67657-r2jq9_calico-system(fc47de84-c877-49ec-9b26-5c23204d879d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:42.058488 containerd[1627]: time="2025-12-16T12:58:42.058345450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:58:42.375845 containerd[1627]: time="2025-12-16T12:58:42.375768680Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:42.377233 containerd[1627]: time="2025-12-16T12:58:42.377070846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:58:42.377406 containerd[1627]: time="2025-12-16T12:58:42.377115311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:42.378011 kubelet[2968]: E1216 12:58:42.377624 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:58:42.378011 kubelet[2968]: E1216 12:58:42.377678 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:58:42.378011 kubelet[2968]: E1216 12:58:42.377855 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcxrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bfdd67657-r2jq9_calico-system(fc47de84-c877-49ec-9b26-5c23204d879d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:42.379107 kubelet[2968]: E1216 12:58:42.379066 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bfdd67657-r2jq9" podUID="fc47de84-c877-49ec-9b26-5c23204d879d" Dec 16 12:58:43.733859 containerd[1627]: time="2025-12-16T12:58:43.733714692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:58:44.053134 containerd[1627]: time="2025-12-16T12:58:44.052836044Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:44.060258 containerd[1627]: time="2025-12-16T12:58:44.060133871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:58:44.060690 
containerd[1627]: time="2025-12-16T12:58:44.060219797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:44.060759 kubelet[2968]: E1216 12:58:44.060687 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:58:44.061385 kubelet[2968]: E1216 12:58:44.060785 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:58:44.061385 kubelet[2968]: E1216 12:58:44.061071 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frm2n_calico-system(ae2d98cb-e462-4622-a2ef-d1063c3df86a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:44.067014 containerd[1627]: time="2025-12-16T12:58:44.066836982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:58:44.384449 containerd[1627]: time="2025-12-16T12:58:44.384351002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:44.385776 containerd[1627]: time="2025-12-16T12:58:44.385725005Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:58:44.385935 containerd[1627]: time="2025-12-16T12:58:44.385905389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:44.386326 kubelet[2968]: E1216 12:58:44.386227 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:58:44.386326 kubelet[2968]: E1216 12:58:44.386311 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:58:44.386627 kubelet[2968]: E1216 12:58:44.386549 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frm2n_calico-system(ae2d98cb-e462-4622-a2ef-d1063c3df86a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:44.388088 kubelet[2968]: E1216 12:58:44.388041 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:58:47.732843 containerd[1627]: time="2025-12-16T12:58:47.732279678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:58:48.049170 containerd[1627]: time="2025-12-16T12:58:48.048639594Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:48.050530 containerd[1627]: time="2025-12-16T12:58:48.050229139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:58:48.050530 containerd[1627]: time="2025-12-16T12:58:48.050445743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:48.050935 kubelet[2968]: E1216 12:58:48.050828 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:58:48.052006 kubelet[2968]: E1216 12:58:48.050939 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:58:48.052006 kubelet[2968]: E1216 12:58:48.051225 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zzkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fb664895f-z7td5_calico-system(654be68d-8474-4290-8738-6c95ee33b1c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:48.053262 kubelet[2968]: E1216 12:58:48.053029 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" podUID="654be68d-8474-4290-8738-6c95ee33b1c3" Dec 16 12:58:50.733240 containerd[1627]: time="2025-12-16T12:58:50.731710708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:58:51.053499 containerd[1627]: 
time="2025-12-16T12:58:51.053176873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:51.054871 containerd[1627]: time="2025-12-16T12:58:51.054823714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:58:51.055027 containerd[1627]: time="2025-12-16T12:58:51.055001862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:51.055441 kubelet[2968]: E1216 12:58:51.055364 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:51.056040 kubelet[2968]: E1216 12:58:51.055469 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:51.056040 kubelet[2968]: E1216 12:58:51.055741 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqwlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6c6f4459b6-wgcgz_calico-apiserver(612befca-b93d-4468-b0c5-1d17cad065aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:51.057513 kubelet[2968]: E1216 12:58:51.057440 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" podUID="612befca-b93d-4468-b0c5-1d17cad065aa" Dec 16 12:58:52.733941 containerd[1627]: time="2025-12-16T12:58:52.733810446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:58:53.055693 containerd[1627]: time="2025-12-16T12:58:53.055115065Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:53.063811 containerd[1627]: time="2025-12-16T12:58:53.059747193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:58:53.063811 containerd[1627]: time="2025-12-16T12:58:53.059802602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:53.064464 kubelet[2968]: E1216 12:58:53.060661 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:58:53.064464 kubelet[2968]: E1216 12:58:53.060729 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:58:53.064464 kubelet[2968]: E1216 12:58:53.060926 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d696,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hjfvx_calico-system(5f28fd1d-daa3-4b1a-9808-93af3076e192): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:53.067490 kubelet[2968]: E1216 12:58:53.066560 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hjfvx" podUID="5f28fd1d-daa3-4b1a-9808-93af3076e192" Dec 16 12:58:54.734034 containerd[1627]: time="2025-12-16T12:58:54.733819050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 
16 12:58:55.048345 containerd[1627]: time="2025-12-16T12:58:55.048154470Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:55.049652 containerd[1627]: time="2025-12-16T12:58:55.049582088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:58:55.049801 containerd[1627]: time="2025-12-16T12:58:55.049722732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:55.050038 kubelet[2968]: E1216 12:58:55.049980 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:55.050491 kubelet[2968]: E1216 12:58:55.050046 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:55.050491 kubelet[2968]: E1216 12:58:55.050224 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-6c6f4459b6-4w99m_calico-apiserver(f26c7895-de48-48b9-98b3-5ed0a263683c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:55.051830 kubelet[2968]: E1216 12:58:55.051764 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" podUID="f26c7895-de48-48b9-98b3-5ed0a263683c" Dec 16 12:58:55.737507 kubelet[2968]: E1216 12:58:55.737358 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:58:57.734616 kubelet[2968]: E1216 12:58:57.734485 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bfdd67657-r2jq9" podUID="fc47de84-c877-49ec-9b26-5c23204d879d" Dec 16 12:59:01.731436 kubelet[2968]: E1216 12:59:01.731330 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" podUID="654be68d-8474-4290-8738-6c95ee33b1c3" Dec 16 12:59:02.733733 kubelet[2968]: E1216 12:59:02.733062 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" podUID="612befca-b93d-4468-b0c5-1d17cad065aa" Dec 16 12:59:03.460393 systemd[1]: Started sshd@9-10.244.27.222:22-139.178.68.195:37478.service - OpenSSH per-connection server daemon (139.178.68.195:37478). Dec 16 12:59:03.471869 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 16 12:59:03.472144 kernel: audit: type=1130 audit(1765889943.459:748): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.27.222:22-139.178.68.195:37478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:03.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.27.222:22-139.178.68.195:37478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:04.434000 audit[5260]: USER_ACCT pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:04.445634 sshd[5260]: Accepted publickey for core from 139.178.68.195 port 37478 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:04.449135 kernel: audit: type=1101 audit(1765889944.434:749): pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:04.446000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:04.455052 kernel: audit: type=1103 audit(1765889944.446:750): pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:04.455479 sshd-session[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:04.460329 kernel: audit: type=1006 audit(1765889944.446:751): pid=5260 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 12:59:04.446000 audit[5260]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6a30a860 a2=3 a3=0 items=0 ppid=1 pid=5260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:04.467723 kernel: audit: type=1300 audit(1765889944.446:751): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6a30a860 a2=3 a3=0 items=0 ppid=1 pid=5260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:04.446000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:04.473090 kernel: audit: type=1327 audit(1765889944.446:751): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:04.482255 systemd-logind[1597]: New session 13 of user core. Dec 16 12:59:04.497693 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:59:04.505000 audit[5260]: USER_START pid=5260 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:04.513706 kernel: audit: type=1105 audit(1765889944.505:752): pid=5260 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:04.515000 audit[5264]: CRED_ACQ pid=5264 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:04.521986 kernel: audit: type=1103 audit(1765889944.515:753): pid=5264 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:05.677597 sshd[5264]: Connection closed by 139.178.68.195 port 37478 Dec 16 12:59:05.676541 sshd-session[5260]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:05.687000 audit[5260]: USER_END pid=5260 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:05.703466 kernel: audit: type=1106 audit(1765889945.687:754): pid=5260 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:05.707019 kernel: audit: type=1104 audit(1765889945.697:755): pid=5260 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:05.697000 audit[5260]: CRED_DISP pid=5260 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:05.709842 systemd[1]: sshd@9-10.244.27.222:22-139.178.68.195:37478.service: Deactivated successfully. 
Dec 16 12:59:05.716449 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:59:05.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.27.222:22-139.178.68.195:37478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:05.727970 systemd-logind[1597]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:59:05.731534 systemd-logind[1597]: Removed session 13. Dec 16 12:59:07.733770 kubelet[2968]: E1216 12:59:07.732997 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" podUID="f26c7895-de48-48b9-98b3-5ed0a263683c" Dec 16 12:59:07.733770 kubelet[2968]: E1216 12:59:07.733702 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hjfvx" podUID="5f28fd1d-daa3-4b1a-9808-93af3076e192" Dec 16 12:59:09.735386 containerd[1627]: time="2025-12-16T12:59:09.734655837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:59:10.090589 containerd[1627]: time="2025-12-16T12:59:10.090306502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:10.092182 containerd[1627]: time="2025-12-16T12:59:10.091843105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:59:10.092411 containerd[1627]: time="2025-12-16T12:59:10.091857286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:10.092863 kubelet[2968]: E1216 12:59:10.092760 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:10.094034 kubelet[2968]: E1216 12:59:10.092883 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:10.094755 kubelet[2968]: E1216 12:59:10.094694 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frm2n_calico-system(ae2d98cb-e462-4622-a2ef-d1063c3df86a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:10.097023 containerd[1627]: time="2025-12-16T12:59:10.096982412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:59:10.437224 containerd[1627]: time="2025-12-16T12:59:10.435398050Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:10.438809 containerd[1627]: time="2025-12-16T12:59:10.438611429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:59:10.438809 containerd[1627]: time="2025-12-16T12:59:10.438649553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:10.439566 kubelet[2968]: E1216 12:59:10.439502 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:10.441979 kubelet[2968]: E1216 12:59:10.439740 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:10.442368 kubelet[2968]: E1216 12:59:10.442295 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frm2n_calico-system(ae2d98cb-e462-4622-a2ef-d1063c3df86a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:10.443883 kubelet[2968]: E1216 12:59:10.443759 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:59:10.860343 systemd[1]: Started sshd@10-10.244.27.222:22-139.178.68.195:60806.service - OpenSSH per-connection server daemon (139.178.68.195:60806). 
Dec 16 12:59:10.873430 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:59:10.873625 kernel: audit: type=1130 audit(1765889950.860:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.27.222:22-139.178.68.195:60806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:10.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.27.222:22-139.178.68.195:60806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:11.768000 audit[5290]: USER_ACCT pid=5290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:11.780938 kernel: audit: type=1101 audit(1765889951.768:758): pid=5290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:11.781591 kernel: audit: type=1103 audit(1765889951.776:759): pid=5290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:11.776000 audit[5290]: CRED_ACQ pid=5290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:11.779162 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:11.783470 sshd[5290]: Accepted publickey for core from 139.178.68.195 port 60806 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:11.786975 kernel: audit: type=1006 audit(1765889951.776:760): pid=5290 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 12:59:11.776000 audit[5290]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd385bca90 a2=3 a3=0 items=0 ppid=1 pid=5290 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:11.795439 kernel: audit: type=1300 audit(1765889951.776:760): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd385bca90 a2=3 a3=0 items=0 ppid=1 pid=5290 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:11.776000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:11.799340 kernel: audit: type=1327 audit(1765889951.776:760): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:11.803103 systemd-logind[1597]: New session 14 of user core. Dec 16 12:59:11.811207 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 12:59:11.818000 audit[5290]: USER_START pid=5290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:11.825991 kernel: audit: type=1105 audit(1765889951.818:761): pid=5290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:11.827000 audit[5295]: CRED_ACQ pid=5295 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:11.835984 kernel: audit: type=1103 audit(1765889951.827:762): pid=5295 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:12.423989 sshd[5295]: Connection closed by 139.178.68.195 port 60806 Dec 16 12:59:12.424849 sshd-session[5290]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:12.428000 audit[5290]: USER_END pid=5290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:12.438588 systemd-logind[1597]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:59:12.440422 kernel: audit: type=1106 audit(1765889952.428:763): pid=5290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:12.442387 systemd[1]: sshd@10-10.244.27.222:22-139.178.68.195:60806.service: Deactivated successfully. Dec 16 12:59:12.428000 audit[5290]: CRED_DISP pid=5290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:12.447990 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:59:12.450480 kernel: audit: type=1104 audit(1765889952.428:764): pid=5290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:12.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.27.222:22-139.178.68.195:60806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:12.453192 systemd-logind[1597]: Removed session 14. Dec 16 12:59:12.737662 containerd[1627]: time="2025-12-16T12:59:12.736868096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:59:13.052464 containerd[1627]: time="2025-12-16T12:59:13.051901071Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:13.053262 containerd[1627]: time="2025-12-16T12:59:13.053161016Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:59:13.053262 containerd[1627]: time="2025-12-16T12:59:13.053219278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:13.053584 kubelet[2968]: E1216 12:59:13.053502 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:59:13.054767 kubelet[2968]: E1216 12:59:13.053623 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:59:13.054767 kubelet[2968]: E1216 12:59:13.053874 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3b832fcc79f24e90b591efebd74820fd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lcxrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bfdd67657-r2jq9_calico-system(fc47de84-c877-49ec-9b26-5c23204d879d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:13.056828 
containerd[1627]: time="2025-12-16T12:59:13.056792394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:59:13.384997 containerd[1627]: time="2025-12-16T12:59:13.384873868Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:13.386619 containerd[1627]: time="2025-12-16T12:59:13.386565970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:59:13.386739 containerd[1627]: time="2025-12-16T12:59:13.386624333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:13.387099 kubelet[2968]: E1216 12:59:13.387029 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:59:13.387179 kubelet[2968]: E1216 12:59:13.387105 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:59:13.387984 kubelet[2968]: E1216 12:59:13.387286 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcxrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-7bfdd67657-r2jq9_calico-system(fc47de84-c877-49ec-9b26-5c23204d879d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:13.388690 kubelet[2968]: E1216 12:59:13.388639 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bfdd67657-r2jq9" podUID="fc47de84-c877-49ec-9b26-5c23204d879d" Dec 16 12:59:13.731700 containerd[1627]: time="2025-12-16T12:59:13.731554469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:59:14.054995 containerd[1627]: time="2025-12-16T12:59:14.054425185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:14.056288 containerd[1627]: time="2025-12-16T12:59:14.055547690Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:59:14.056288 containerd[1627]: time="2025-12-16T12:59:14.055989665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:14.056422 kubelet[2968]: E1216 12:59:14.056217 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:14.056422 kubelet[2968]: E1216 12:59:14.056278 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:14.057259 kubelet[2968]: E1216 12:59:14.056497 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zzkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fb664895f-z7td5_calico-system(654be68d-8474-4290-8738-6c95ee33b1c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:14.058227 kubelet[2968]: E1216 12:59:14.058177 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" podUID="654be68d-8474-4290-8738-6c95ee33b1c3" Dec 16 12:59:16.730974 containerd[1627]: time="2025-12-16T12:59:16.730781539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:17.085066 containerd[1627]: 
time="2025-12-16T12:59:17.085007475Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:17.086399 containerd[1627]: time="2025-12-16T12:59:17.086336308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:17.086619 containerd[1627]: time="2025-12-16T12:59:17.086439058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:17.086912 kubelet[2968]: E1216 12:59:17.086850 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:17.087833 kubelet[2968]: E1216 12:59:17.087479 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:17.087833 kubelet[2968]: E1216 12:59:17.087737 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqwlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6c6f4459b6-wgcgz_calico-apiserver(612befca-b93d-4468-b0c5-1d17cad065aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:17.091534 kubelet[2968]: E1216 12:59:17.091461 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" podUID="612befca-b93d-4468-b0c5-1d17cad065aa" Dec 16 12:59:17.605719 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:59:17.606017 kernel: audit: type=1130 audit(1765889957.598:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.27.222:22-139.178.68.195:60822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:17.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.27.222:22-139.178.68.195:60822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:17.599543 systemd[1]: Started sshd@11-10.244.27.222:22-139.178.68.195:60822.service - OpenSSH per-connection server daemon (139.178.68.195:60822). Dec 16 12:59:18.486000 audit[5308]: USER_ACCT pid=5308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:18.494428 kernel: audit: type=1101 audit(1765889958.486:767): pid=5308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:18.494596 sshd[5308]: Accepted publickey for core from 139.178.68.195 port 60822 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:18.497660 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:18.495000 audit[5308]: CRED_ACQ pid=5308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:18.503045 kernel: audit: type=1103 audit(1765889958.495:768): pid=5308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:18.512194 kernel: audit: type=1006 audit(1765889958.495:769): pid=5308 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 12:59:18.495000 audit[5308]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd73ee350 a2=3 a3=0 items=0 ppid=1 pid=5308 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:18.495000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:18.521657 kernel: audit: type=1300 audit(1765889958.495:769): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd73ee350 a2=3 a3=0 items=0 ppid=1 pid=5308 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:18.521768 kernel: audit: type=1327 audit(1765889958.495:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:18.518702 systemd-logind[1597]: New session 15 of user core. Dec 16 12:59:18.526361 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:59:18.532000 audit[5308]: USER_START pid=5308 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:18.540450 kernel: audit: type=1105 audit(1765889958.532:770): pid=5308 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:18.540000 audit[5312]: CRED_ACQ pid=5312 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:18.546537 kernel: audit: type=1103 audit(1765889958.540:771): pid=5312 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:19.119089 sshd[5312]: Connection closed by 139.178.68.195 port 60822 Dec 16 12:59:19.120225 sshd-session[5308]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:19.123000 audit[5308]: USER_END pid=5308 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:19.132996 kernel: audit: type=1106 audit(1765889959.123:772): pid=5308 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:19.133789 systemd[1]: sshd@11-10.244.27.222:22-139.178.68.195:60822.service: Deactivated successfully. Dec 16 12:59:19.137614 systemd[1]: session-15.scope: Deactivated successfully. 
Dec 16 12:59:19.123000 audit[5308]: CRED_DISP pid=5308 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:19.146991 kernel: audit: type=1104 audit(1765889959.123:773): pid=5308 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:19.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.27.222:22-139.178.68.195:60822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:19.148287 systemd-logind[1597]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:59:19.152410 systemd-logind[1597]: Removed session 15. Dec 16 12:59:19.740616 containerd[1627]: time="2025-12-16T12:59:19.740406620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:20.053578 containerd[1627]: time="2025-12-16T12:59:20.053075709Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:20.054604 containerd[1627]: time="2025-12-16T12:59:20.054476169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:20.054604 containerd[1627]: time="2025-12-16T12:59:20.054530013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:20.056245 kubelet[2968]: E1216 12:59:20.056133 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:20.057416 kubelet[2968]: E1216 12:59:20.056750 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:20.057714 kubelet[2968]: E1216 12:59:20.057641 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6f4459b6-4w99m_calico-apiserver(f26c7895-de48-48b9-98b3-5ed0a263683c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:20.059071 kubelet[2968]: E1216 12:59:20.059017 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" podUID="f26c7895-de48-48b9-98b3-5ed0a263683c" Dec 16 12:59:20.732535 containerd[1627]: time="2025-12-16T12:59:20.732423022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:59:21.067040 containerd[1627]: time="2025-12-16T12:59:21.066380366Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:21.068009 containerd[1627]: time="2025-12-16T12:59:21.067887866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:59:21.068009 containerd[1627]: time="2025-12-16T12:59:21.067992750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:21.068403 kubelet[2968]: 
E1216 12:59:21.068305 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:59:21.069162 kubelet[2968]: E1216 12:59:21.068410 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:59:21.069162 kubelet[2968]: E1216 12:59:21.068666 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d696,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hjfvx_calico-system(5f28fd1d-daa3-4b1a-9808-93af3076e192): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:21.070782 kubelet[2968]: E1216 12:59:21.069781 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hjfvx" podUID="5f28fd1d-daa3-4b1a-9808-93af3076e192" Dec 16 12:59:23.734279 kubelet[2968]: E1216 12:59:23.733859 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bfdd67657-r2jq9" podUID="fc47de84-c877-49ec-9b26-5c23204d879d" Dec 16 12:59:24.282656 systemd[1]: Started sshd@12-10.244.27.222:22-139.178.68.195:43644.service - OpenSSH per-connection server daemon (139.178.68.195:43644). Dec 16 12:59:24.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.27.222:22-139.178.68.195:43644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:24.292202 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:59:24.292299 kernel: audit: type=1130 audit(1765889964.280:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.27.222:22-139.178.68.195:43644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:25.146000 audit[5327]: USER_ACCT pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:25.159868 kernel: audit: type=1101 audit(1765889965.146:776): pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:25.160100 sshd[5327]: Accepted publickey for core from 139.178.68.195 port 43644 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:25.160000 audit[5327]: CRED_ACQ pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:25.167981 kernel: audit: type=1103 audit(1765889965.160:777): pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:25.169608 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:25.174981 kernel: audit: type=1006 audit(1765889965.166:778): pid=5327 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:59:25.166000 audit[5327]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbf094fd0 a2=3 a3=0 items=0 ppid=1 pid=5327 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:25.166000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:25.184297 kernel: audit: type=1300 audit(1765889965.166:778): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbf094fd0 a2=3 a3=0 items=0 ppid=1 pid=5327 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:25.184388 kernel: audit: type=1327 audit(1765889965.166:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:25.193643 systemd-logind[1597]: New session 16 of user core. Dec 16 12:59:25.205351 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 12:59:25.210000 audit[5327]: USER_START pid=5327 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:25.224986 kernel: audit: type=1105 audit(1765889965.210:779): pid=5327 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:25.230000 audit[5331]: CRED_ACQ pid=5331 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:25.237998 kernel: audit: type=1103 audit(1765889965.230:780): pid=5331 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:25.737988 sshd[5331]: Connection closed by 139.178.68.195 port 43644 Dec 16 12:59:25.737231 sshd-session[5327]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:25.739844 kubelet[2968]: E1216 12:59:25.739591 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:59:25.744000 audit[5327]: USER_END pid=5327 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:25.759115 kernel: audit: type=1106 audit(1765889965.744:781): pid=5327 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:25.744000 audit[5327]: CRED_DISP pid=5327 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 
addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:25.764697 systemd[1]: sshd@12-10.244.27.222:22-139.178.68.195:43644.service: Deactivated successfully. Dec 16 12:59:25.769286 kernel: audit: type=1104 audit(1765889965.744:782): pid=5327 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:25.769456 systemd-logind[1597]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:59:25.771673 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:59:25.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.27.222:22-139.178.68.195:43644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.780334 systemd-logind[1597]: Removed session 16. Dec 16 12:59:25.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.244.27.222:22-139.178.68.195:43656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.929998 systemd[1]: Started sshd@13-10.244.27.222:22-139.178.68.195:43656.service - OpenSSH per-connection server daemon (139.178.68.195:43656). Dec 16 12:59:26.801000 audit[5343]: USER_ACCT pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:26.804269 sshd[5343]: Accepted publickey for core from 139.178.68.195 port 43656 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:26.803000 audit[5343]: CRED_ACQ pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:26.803000 audit[5343]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc28bb75b0 a2=3 a3=0 items=0 ppid=1 pid=5343 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:26.803000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:26.806000 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:26.818532 systemd-logind[1597]: New session 17 of user core. Dec 16 12:59:26.826256 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 12:59:26.832000 audit[5343]: USER_START pid=5343 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:26.835000 audit[5347]: CRED_ACQ pid=5347 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:27.518556 sshd[5347]: Connection closed by 139.178.68.195 port 43656 Dec 16 12:59:27.521269 sshd-session[5343]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:27.524000 audit[5343]: USER_END pid=5343 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:27.524000 audit[5343]: CRED_DISP pid=5343 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:27.530071 systemd[1]: sshd@13-10.244.27.222:22-139.178.68.195:43656.service: Deactivated successfully. Dec 16 12:59:27.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.244.27.222:22-139.178.68.195:43656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.536250 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:59:27.540165 systemd-logind[1597]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:59:27.544661 systemd-logind[1597]: Removed session 17. Dec 16 12:59:27.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.27.222:22-139.178.68.195:43662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.683795 systemd[1]: Started sshd@14-10.244.27.222:22-139.178.68.195:43662.service - OpenSSH per-connection server daemon (139.178.68.195:43662). 
Dec 16 12:59:28.622000 audit[5374]: USER_ACCT pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:28.623990 sshd[5374]: Accepted publickey for core from 139.178.68.195 port 43662 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:28.623000 audit[5374]: CRED_ACQ pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:28.624000 audit[5374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9ce2f340 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:28.624000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:28.626450 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:28.634195 systemd-logind[1597]: New session 18 of user core. Dec 16 12:59:28.640268 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:59:28.645000 audit[5374]: USER_START pid=5374 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:28.647000 audit[5384]: CRED_ACQ pid=5384 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:28.731418 kubelet[2968]: E1216 12:59:28.731329 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" podUID="612befca-b93d-4468-b0c5-1d17cad065aa" Dec 16 12:59:28.732501 kubelet[2968]: E1216 12:59:28.731815 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" podUID="654be68d-8474-4290-8738-6c95ee33b1c3" Dec 16 12:59:29.247982 sshd[5384]: Connection closed by 139.178.68.195 port 43662 Dec 16 12:59:29.248002 sshd-session[5374]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:29.250000 audit[5374]: 
USER_END pid=5374 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:29.251000 audit[5374]: CRED_DISP pid=5374 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:29.256228 systemd[1]: sshd@14-10.244.27.222:22-139.178.68.195:43662.service: Deactivated successfully. Dec 16 12:59:29.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.27.222:22-139.178.68.195:43662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.262300 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:59:29.265258 systemd-logind[1597]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:59:29.271217 systemd-logind[1597]: Removed session 18. Dec 16 12:59:31.733011 kubelet[2968]: E1216 12:59:31.732553 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hjfvx" podUID="5f28fd1d-daa3-4b1a-9808-93af3076e192" Dec 16 12:59:32.732846 kubelet[2968]: E1216 12:59:32.732606 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" podUID="f26c7895-de48-48b9-98b3-5ed0a263683c" Dec 16 12:59:34.432899 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 12:59:34.433254 kernel: audit: type=1130 audit(1765889974.427:802): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.27.222:22-139.178.68.195:35678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:34.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.27.222:22-139.178.68.195:35678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:34.428603 systemd[1]: Started sshd@15-10.244.27.222:22-139.178.68.195:35678.service - OpenSSH per-connection server daemon (139.178.68.195:35678). 
Dec 16 12:59:34.736093 kubelet[2968]: E1216 12:59:34.735780 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bfdd67657-r2jq9" podUID="fc47de84-c877-49ec-9b26-5c23204d879d" Dec 16 12:59:35.305000 audit[5405]: USER_ACCT pid=5405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:35.316152 kernel: audit: type=1101 audit(1765889975.305:803): pid=5405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:35.316260 sshd[5405]: Accepted publickey for core from 139.178.68.195 port 35678 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:35.313624 sshd-session[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:35.311000 audit[5405]: CRED_ACQ pid=5405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:35.326028 kernel: audit: type=1103 audit(1765889975.311:804): pid=5405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:35.331978 kernel: audit: type=1006 audit(1765889975.311:805): pid=5405 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 12:59:35.334268 systemd-logind[1597]: New session 19 of user core. Dec 16 12:59:35.344158 kernel: audit: type=1300 audit(1765889975.311:805): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd52df8200 a2=3 a3=0 items=0 ppid=1 pid=5405 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:35.311000 audit[5405]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd52df8200 a2=3 a3=0 items=0 ppid=1 pid=5405 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:35.344857 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 12:59:35.353713 kernel: audit: type=1327 audit(1765889975.311:805): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:35.311000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:35.353000 audit[5405]: USER_START pid=5405 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:35.362126 kernel: audit: type=1105 audit(1765889975.353:806): pid=5405 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:35.361000 audit[5409]: CRED_ACQ pid=5409 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:35.367998 kernel: audit: type=1103 audit(1765889975.361:807): pid=5409 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:35.931355 sshd[5409]: Connection closed by 139.178.68.195 port 35678 Dec 16 12:59:35.931757 sshd-session[5405]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:35.934000 audit[5405]: USER_END pid=5405 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:35.945989 kernel: audit: type=1106 audit(1765889975.934:808): pid=5405 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:35.946870 systemd[1]: sshd@15-10.244.27.222:22-139.178.68.195:35678.service: Deactivated successfully. Dec 16 12:59:35.951840 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:59:35.958344 systemd-logind[1597]: Session 19 logged out. Waiting for processes to exit. 
Dec 16 12:59:35.938000 audit[5405]: CRED_DISP pid=5405 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:35.964996 kernel: audit: type=1104 audit(1765889975.938:809): pid=5405 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:35.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.27.222:22-139.178.68.195:35678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:35.968018 systemd-logind[1597]: Removed session 19. Dec 16 12:59:39.734000 kubelet[2968]: E1216 12:59:39.733606 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:59:41.106918 systemd[1]: Started sshd@16-10.244.27.222:22-139.178.68.195:51446.service - OpenSSH per-connection server daemon (139.178.68.195:51446). Dec 16 12:59:41.114026 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:59:41.114168 kernel: audit: type=1130 audit(1765889981.106:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.27.222:22-139.178.68.195:51446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:41.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.27.222:22-139.178.68.195:51446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:41.997000 audit[5427]: USER_ACCT pid=5427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:42.006020 kernel: audit: type=1101 audit(1765889981.997:812): pid=5427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:42.007116 sshd[5427]: Accepted publickey for core from 139.178.68.195 port 51446 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:42.011217 sshd-session[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:42.008000 audit[5427]: CRED_ACQ pid=5427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:42.020658 kernel: audit: type=1103 audit(1765889982.008:813): pid=5427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:42.021005 kernel: audit: type=1006 audit(1765889982.008:814): pid=5427 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 16 12:59:42.008000 audit[5427]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd109fe760 a2=3 a3=0 items=0 ppid=1 pid=5427 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:42.026986 kernel: audit: type=1300 audit(1765889982.008:814): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd109fe760 a2=3 a3=0 items=0 ppid=1 pid=5427 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:42.008000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:42.032507 kernel: audit: type=1327 audit(1765889982.008:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:42.033333 systemd-logind[1597]: New session 20 of user core. Dec 16 12:59:42.042304 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 12:59:42.048000 audit[5427]: USER_START pid=5427 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:42.056025 kernel: audit: type=1105 audit(1765889982.048:815): pid=5427 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:42.051000 audit[5431]: CRED_ACQ pid=5431 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:42.063034 kernel: audit: type=1103 audit(1765889982.051:816): pid=5431 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:42.663378 sshd[5431]: Connection closed by 139.178.68.195 port 51446 Dec 16 12:59:42.667240 sshd-session[5427]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:42.670000 audit[5427]: USER_END pid=5427 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:42.676302 systemd[1]: sshd@16-10.244.27.222:22-139.178.68.195:51446.service: Deactivated successfully. Dec 16 12:59:42.681877 kernel: audit: type=1106 audit(1765889982.670:817): pid=5427 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:42.681577 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:59:42.670000 audit[5427]: CRED_DISP pid=5427 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:42.690205 kernel: audit: type=1104 audit(1765889982.670:818): pid=5427 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:42.689014 systemd-logind[1597]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:59:42.692883 systemd-logind[1597]: Removed session 20. Dec 16 12:59:42.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.27.222:22-139.178.68.195:51446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:42.732202 kubelet[2968]: E1216 12:59:42.732111 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" podUID="612befca-b93d-4468-b0c5-1d17cad065aa" Dec 16 12:59:43.731991 kubelet[2968]: E1216 12:59:43.731895 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" podUID="654be68d-8474-4290-8738-6c95ee33b1c3" Dec 16 12:59:44.732235 kubelet[2968]: E1216 12:59:44.732120 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hjfvx" podUID="5f28fd1d-daa3-4b1a-9808-93af3076e192" Dec 16 12:59:45.739041 kubelet[2968]: E1216 12:59:45.738855 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" podUID="f26c7895-de48-48b9-98b3-5ed0a263683c" Dec 16 12:59:47.847004 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:59:47.847210 kernel: audit: type=1130 audit(1765889987.840:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.27.222:22-139.178.68.195:51450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:47.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.27.222:22-139.178.68.195:51450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:47.841218 systemd[1]: Started sshd@17-10.244.27.222:22-139.178.68.195:51450.service - OpenSSH per-connection server daemon (139.178.68.195:51450). 
Dec 16 12:59:48.712000 audit[5443]: USER_ACCT pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:48.722809 sshd[5443]: Accepted publickey for core from 139.178.68.195 port 51450 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:48.723465 kernel: audit: type=1101 audit(1765889988.712:821): pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:48.730333 kernel: audit: type=1103 audit(1765889988.723:822): pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:48.723000 audit[5443]: CRED_ACQ pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:48.725788 sshd-session[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:48.734975 kernel: audit: type=1006 audit(1765889988.723:823): pid=5443 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 16 12:59:48.740334 kubelet[2968]: E1216 12:59:48.740017 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bfdd67657-r2jq9" podUID="fc47de84-c877-49ec-9b26-5c23204d879d" Dec 16 12:59:48.723000 audit[5443]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0c02e060 a2=3 a3=0 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:48.749048 kernel: audit: type=1300 audit(1765889988.723:823): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0c02e060 a2=3 a3=0 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:48.723000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:48.752975 kernel: audit: type=1327 audit(1765889988.723:823): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:48.756232 systemd-logind[1597]: New session 21 of user core. Dec 16 12:59:48.763327 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:59:48.779019 kernel: audit: type=1105 audit(1765889988.771:824): pid=5443 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:48.771000 audit[5443]: USER_START pid=5443 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:48.776000 audit[5447]: CRED_ACQ pid=5447 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:48.786989 kernel: audit: type=1103 audit(1765889988.776:825): pid=5447 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:49.380004 sshd[5447]: Connection closed by 139.178.68.195 port 51450 Dec 16 12:59:49.381242 sshd-session[5443]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:49.384000 audit[5443]: USER_END pid=5443 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:49.395998 kernel: audit: type=1106 audit(1765889989.384:826): pid=5443 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:49.403564 systemd[1]: sshd@17-10.244.27.222:22-139.178.68.195:51450.service: Deactivated successfully. Dec 16 12:59:49.411295 kernel: audit: type=1104 audit(1765889989.396:827): pid=5443 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:49.396000 audit[5443]: CRED_DISP pid=5443 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:49.408806 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:59:49.413087 systemd-logind[1597]: Session 21 logged out. Waiting for processes to exit. 
Dec 16 12:59:49.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.27.222:22-139.178.68.195:51450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:49.415518 systemd-logind[1597]: Removed session 21. Dec 16 12:59:50.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.244.27.222:22-139.178.68.195:51462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:50.603496 systemd[1]: Started sshd@18-10.244.27.222:22-139.178.68.195:51462.service - OpenSSH per-connection server daemon (139.178.68.195:51462). Dec 16 12:59:50.732690 containerd[1627]: time="2025-12-16T12:59:50.732567145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:59:51.067901 containerd[1627]: time="2025-12-16T12:59:51.067706086Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:51.070110 containerd[1627]: time="2025-12-16T12:59:51.070059141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:59:51.070424 containerd[1627]: time="2025-12-16T12:59:51.070148680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:51.071655 kubelet[2968]: E1216 12:59:51.071196 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:51.072439 kubelet[2968]: E1216 12:59:51.071770 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:51.073062 kubelet[2968]: E1216 12:59:51.072911 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frm2n_calico-system(ae2d98cb-e462-4622-a2ef-d1063c3df86a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:51.078133 containerd[1627]: time="2025-12-16T12:59:51.077877917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:59:51.387245 containerd[1627]: time="2025-12-16T12:59:51.386699552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:51.388976 containerd[1627]: time="2025-12-16T12:59:51.388501391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:59:51.388976 containerd[1627]: time="2025-12-16T12:59:51.388632189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:51.389141 kubelet[2968]: E1216 12:59:51.388976 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:51.389407 kubelet[2968]: E1216 12:59:51.389336 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:51.389833 kubelet[2968]: E1216 12:59:51.389765 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jrd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frm2n_calico-system(ae2d98cb-e462-4622-a2ef-d1063c3df86a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:51.391465 kubelet[2968]: E1216 12:59:51.391405 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 12:59:51.525000 audit[5465]: USER_ACCT pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' Dec 16 12:59:51.527248 sshd[5465]: Accepted publickey for core from 139.178.68.195 port 51462 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:51.528000 audit[5465]: CRED_ACQ pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:51.528000 audit[5465]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5cae0340 a2=3 a3=0 items=0 ppid=1 pid=5465 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:51.528000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:51.530351 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:51.548054 systemd-logind[1597]: New session 22 of user core. Dec 16 12:59:51.554215 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:59:51.561000 audit[5465]: USER_START pid=5465 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:51.565000 audit[5469]: CRED_ACQ pid=5469 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:52.617136 sshd[5469]: Connection closed by 139.178.68.195 port 51462 Dec 16 12:59:52.629646 sshd-session[5465]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:52.635000 audit[5465]: USER_END pid=5465 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:52.637000 audit[5465]: CRED_DISP pid=5465 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:52.648696 systemd-logind[1597]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:59:52.649896 systemd[1]: sshd@18-10.244.27.222:22-139.178.68.195:51462.service: Deactivated successfully. Dec 16 12:59:52.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.244.27.222:22-139.178.68.195:51462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:52.656189 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:59:52.661231 systemd-logind[1597]: Removed session 22. Dec 16 12:59:52.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.244.27.222:22-139.178.68.195:54622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:52.780898 systemd[1]: Started sshd@19-10.244.27.222:22-139.178.68.195:54622.service - OpenSSH per-connection server daemon (139.178.68.195:54622). Dec 16 12:59:53.688002 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 16 12:59:53.688302 kernel: audit: type=1101 audit(1765889993.675:839): pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:53.675000 audit[5479]: USER_ACCT pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:53.689255 sshd[5479]: Accepted publickey for core from 139.178.68.195 port 54622 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:53.692697 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:53.689000 audit[5479]: CRED_ACQ pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:53.700055 kernel: audit: type=1103 audit(1765889993.689:840): pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:53.706984 kernel: audit: type=1006 audit(1765889993.689:841): pid=5479 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 12:59:53.689000 audit[5479]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6e435290 a2=3 a3=0 items=0 ppid=1 pid=5479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:53.714346 kernel: audit: type=1300 audit(1765889993.689:841): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6e435290 a2=3 a3=0 items=0 ppid=1 pid=5479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:53.718713 systemd-logind[1597]: New session 23 of user core. Dec 16 12:59:53.689000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:53.725780 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 12:59:53.726204 kernel: audit: type=1327 audit(1765889993.689:841): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:53.731000 audit[5479]: USER_START pid=5479 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:53.738981 kernel: audit: type=1105 audit(1765889993.731:842): pid=5479 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:53.741810 kubelet[2968]: E1216 12:59:53.740343 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" podUID="612befca-b93d-4468-b0c5-1d17cad065aa" Dec 16 12:59:53.745000 audit[5483]: CRED_ACQ pid=5483 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:53.751984 kernel: audit: type=1103 audit(1765889993.745:843): pid=5483 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:55.087000 audit[5493]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:55.096110 kernel: audit: type=1325 audit(1765889995.087:844): table=filter:145 family=2 entries=26 op=nft_register_rule pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:55.087000 audit[5493]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe8dcfa1f0 a2=0 a3=7ffe8dcfa1dc items=0 ppid=3129 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:55.106102 kernel: audit: type=1300 audit(1765889995.087:844): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe8dcfa1f0 a2=0 a3=7ffe8dcfa1dc items=0 ppid=3129 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:55.087000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:55.110990 kernel: audit: type=1327 audit(1765889995.087:844): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:55.097000 audit[5493]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:55.097000 audit[5493]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe8dcfa1f0 a2=0 a3=0 items=0 ppid=3129 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:55.097000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:55.140000 audit[5495]: NETFILTER_CFG table=filter:147 family=2 entries=38 op=nft_register_rule pid=5495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:55.140000 audit[5495]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffc9b80020 a2=0 a3=7fffc9b8000c items=0 ppid=3129 pid=5495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:55.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:55.146000 audit[5495]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:55.146000 audit[5495]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffc9b80020 a2=0 a3=0 items=0 ppid=3129 pid=5495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:55.146000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:55.273913 sshd[5483]: Connection closed by 139.178.68.195 port 54622 Dec 16 12:59:55.275591 sshd-session[5479]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:55.280000 audit[5479]: USER_END pid=5479 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:55.280000 audit[5479]: CRED_DISP pid=5479 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:55.286739 systemd[1]: sshd@19-10.244.27.222:22-139.178.68.195:54622.service: Deactivated successfully. Dec 16 12:59:55.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.244.27.222:22-139.178.68.195:54622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:55.290515 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:59:55.294093 systemd-logind[1597]: Session 23 logged out. 
Waiting for processes to exit. Dec 16 12:59:55.295945 systemd-logind[1597]: Removed session 23. Dec 16 12:59:55.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.27.222:22-139.178.68.195:54634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:55.452380 systemd[1]: Started sshd@20-10.244.27.222:22-139.178.68.195:54634.service - OpenSSH per-connection server daemon (139.178.68.195:54634). Dec 16 12:59:56.390938 sshd[5500]: Accepted publickey for core from 139.178.68.195 port 54634 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:56.390000 audit[5500]: USER_ACCT pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:56.391000 audit[5500]: CRED_ACQ pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:56.391000 audit[5500]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9d8d93a0 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.391000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:56.393786 sshd-session[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:56.402475 systemd-logind[1597]: New session 24 of user core. Dec 16 12:59:56.413284 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 12:59:56.420000 audit[5500]: USER_START pid=5500 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:56.424000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:56.734128 kubelet[2968]: E1216 12:59:56.733504 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" podUID="f26c7895-de48-48b9-98b3-5ed0a263683c" Dec 16 12:59:56.736317 kubelet[2968]: E1216 12:59:56.735438 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hjfvx" podUID="5f28fd1d-daa3-4b1a-9808-93af3076e192" Dec 16 12:59:57.354263 sshd[5504]: Connection closed by 139.178.68.195 port 54634 Dec 16 12:59:57.354886 sshd-session[5500]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:57.358000 audit[5500]: USER_END pid=5500 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:57.358000 audit[5500]: CRED_DISP pid=5500 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:57.364297 systemd[1]: sshd@20-10.244.27.222:22-139.178.68.195:54634.service: Deactivated successfully. Dec 16 12:59:57.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.27.222:22-139.178.68.195:54634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:57.368825 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:59:57.371479 systemd-logind[1597]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:59:57.374445 systemd-logind[1597]: Removed session 24. Dec 16 12:59:57.532495 systemd[1]: Started sshd@21-10.244.27.222:22-139.178.68.195:54648.service - OpenSSH per-connection server daemon (139.178.68.195:54648). 
Dec 16 12:59:57.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.27.222:22-139.178.68.195:54648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:58.419000 audit[5534]: USER_ACCT pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:58.421250 sshd[5534]: Accepted publickey for core from 139.178.68.195 port 54648 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 12:59:58.421000 audit[5534]: CRED_ACQ pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:58.421000 audit[5534]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca1c3eae0 a2=3 a3=0 items=0 ppid=1 pid=5534 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:58.421000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:58.423741 sshd-session[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:58.431773 systemd-logind[1597]: New session 25 of user core. Dec 16 12:59:58.441216 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 12:59:58.447000 audit[5534]: USER_START pid=5534 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:58.451000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:58.730489 containerd[1627]: time="2025-12-16T12:59:58.730280881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:59:59.051918 sshd[5545]: Connection closed by 139.178.68.195 port 54648 Dec 16 12:59:59.052819 sshd-session[5534]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:59.054000 audit[5534]: USER_END pid=5534 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:59.065085 kernel: kauditd_printk_skb: 31 callbacks suppressed Dec 16 12:59:59.065201 kernel: audit: type=1106 audit(1765889999.054:866): pid=5534 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:59.072486 containerd[1627]: time="2025-12-16T12:59:59.072245621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:59.061000 audit[5534]: CRED_DISP pid=5534 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:59.074284 kernel: audit: type=1104 audit(1765889999.061:867): pid=5534 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 12:59:59.077419 containerd[1627]: time="2025-12-16T12:59:59.075189497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:59:59.077419 containerd[1627]: time="2025-12-16T12:59:59.075314565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:59.077636 kubelet[2968]: E1216 12:59:59.075526 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:59.077636 kubelet[2968]: E1216 12:59:59.075630 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:59.078208 systemd[1]: sshd@21-10.244.27.222:22-139.178.68.195:54648.service: Deactivated successfully. 
Dec 16 12:59:59.080111 kubelet[2968]: E1216 12:59:59.075928 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zzkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fb664895f-z7td5_calico-system(654be68d-8474-4290-8738-6c95ee33b1c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:59.081334 kubelet[2968]: E1216 12:59:59.081276 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" podUID="654be68d-8474-4290-8738-6c95ee33b1c3" Dec 16 12:59:59.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.27.222:22-139.178.68.195:54648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:59.086987 kernel: audit: type=1131 audit(1765889999.078:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.27.222:22-139.178.68.195:54648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:59.087649 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:59:59.091262 systemd-logind[1597]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:59:59.092821 systemd-logind[1597]: Removed session 25. Dec 16 13:00:03.756544 kubelet[2968]: E1216 13:00:03.756338 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a" Dec 16 13:00:03.766452 containerd[1627]: time="2025-12-16T13:00:03.766400099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:00:04.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.27.222:22-139.178.68.195:49634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:00:04.211376 systemd[1]: Started sshd@22-10.244.27.222:22-139.178.68.195:49634.service - OpenSSH per-connection server daemon (139.178.68.195:49634). Dec 16 13:00:04.220043 kernel: audit: type=1130 audit(1765890004.210:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.27.222:22-139.178.68.195:49634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:00:04.320000 audit[5583]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=5583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:04.326005 kernel: audit: type=1325 audit(1765890004.320:870): table=filter:149 family=2 entries=26 op=nft_register_rule pid=5583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:04.320000 audit[5583]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd3f2ddd80 a2=0 a3=7ffd3f2ddd6c items=0 ppid=3129 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:04.333984 kernel: audit: type=1300 audit(1765890004.320:870): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd3f2ddd80 a2=0 a3=7ffd3f2ddd6c items=0 ppid=3129 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:04.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:04.338010 kernel: audit: type=1327 audit(1765890004.320:870): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:04.345000 audit[5583]: NETFILTER_CFG table=nat:150 family=2 entries=104 op=nft_register_chain pid=5583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:04.354624 kernel: audit: type=1325 audit(1765890004.345:871): table=nat:150 family=2 entries=104 op=nft_register_chain pid=5583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:04.354798 kernel: audit: type=1300 audit(1765890004.345:871): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd3f2ddd80 a2=0 a3=7ffd3f2ddd6c items=0 ppid=3129 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:04.345000 audit[5583]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd3f2ddd80 a2=0 a3=7ffd3f2ddd6c items=0 ppid=3129 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:04.345000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:04.362981 kernel: audit: type=1327 audit(1765890004.345:871): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:04.871229 containerd[1627]: time="2025-12-16T13:00:04.871159790Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:04.873069 containerd[1627]: time="2025-12-16T13:00:04.873013804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:00:04.873171 containerd[1627]: time="2025-12-16T13:00:04.873126890Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:04.875222 kubelet[2968]: E1216 13:00:04.875164 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:00:04.875682 kubelet[2968]: E1216 13:00:04.875241 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:00:04.875682 kubelet[2968]: E1216 13:00:04.875451 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3b832fcc79f24e90b591efebd74820fd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lcxrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bfdd67657-r2jq9_calico-system(fc47de84-c877-49ec-9b26-5c23204d879d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:04.879160 containerd[1627]: time="2025-12-16T13:00:04.879114599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:00:05.062000 audit[5579]: USER_ACCT pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:05.066246 sshd[5579]: Accepted publickey for core from 139.178.68.195 port 49634 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 13:00:05.069980 kernel: audit: type=1101 audit(1765890005.062:872): pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:05.070000 audit[5579]: CRED_ACQ pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:05.076998 kernel: audit: type=1103 audit(1765890005.070:873): pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:05.078037 sshd-session[5579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:00:05.075000 audit[5579]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4e575c90 a2=3 a3=0 items=0 ppid=1 pid=5579 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:05.075000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:00:05.082154 kernel: audit: type=1006 audit(1765890005.075:874): pid=5579 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 13:00:05.091530 systemd-logind[1597]: New session 26 of user core. Dec 16 13:00:05.099817 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 13:00:05.106000 audit[5579]: USER_START pid=5579 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:05.110000 audit[5585]: CRED_ACQ pid=5585 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:05.731897 sshd[5585]: Connection closed by 139.178.68.195 port 49634 Dec 16 13:00:05.732743 sshd-session[5579]: pam_unix(sshd:session): session closed for user core Dec 16 13:00:05.738000 audit[5579]: USER_END pid=5579 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:05.739000 audit[5579]: CRED_DISP pid=5579 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:05.746516 systemd-logind[1597]: Session 26 logged out. Waiting for processes to exit. Dec 16 13:00:05.747637 systemd[1]: sshd@22-10.244.27.222:22-139.178.68.195:49634.service: Deactivated successfully. 
Dec 16 13:00:05.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.27.222:22-139.178.68.195:49634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:00:05.753134 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 13:00:05.756877 systemd-logind[1597]: Removed session 26. Dec 16 13:00:05.872410 containerd[1627]: time="2025-12-16T13:00:05.872161456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:05.886314 containerd[1627]: time="2025-12-16T13:00:05.886111533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:00:05.886314 containerd[1627]: time="2025-12-16T13:00:05.886259386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:05.886548 kubelet[2968]: E1216 13:00:05.886455 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:00:05.886548 kubelet[2968]: E1216 13:00:05.886533 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:00:05.887981 kubelet[2968]: E1216 13:00:05.886806 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcxrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bfdd67657-r2jq9_calico-system(fc47de84-c877-49ec-9b26-5c23204d879d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:05.888413 kubelet[2968]: E1216 13:00:05.888306 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bfdd67657-r2jq9" podUID="fc47de84-c877-49ec-9b26-5c23204d879d" Dec 16 13:00:05.888691 containerd[1627]: time="2025-12-16T13:00:05.888358438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:00:06.847023 containerd[1627]: time="2025-12-16T13:00:06.846887929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:06.848766 containerd[1627]: time="2025-12-16T13:00:06.848685004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
16 13:00:06.848855 containerd[1627]: time="2025-12-16T13:00:06.848800365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:06.849176 kubelet[2968]: E1216 13:00:06.849104 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:06.849410 kubelet[2968]: E1216 13:00:06.849180 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:06.849618 kubelet[2968]: E1216 13:00:06.849417 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqwlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6f4459b6-wgcgz_calico-apiserver(612befca-b93d-4468-b0c5-1d17cad065aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:06.850702 kubelet[2968]: E1216 13:00:06.850664 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-wgcgz" podUID="612befca-b93d-4468-b0c5-1d17cad065aa" Dec 16 13:00:09.732856 containerd[1627]: time="2025-12-16T13:00:09.732788917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:00:10.448702 containerd[1627]: time="2025-12-16T13:00:10.448629519Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:10.452408 containerd[1627]: time="2025-12-16T13:00:10.451788502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:10.452408 containerd[1627]: time="2025-12-16T13:00:10.451867749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:00:10.453066 kubelet[2968]: E1216 13:00:10.453010 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:00:10.453571 kubelet[2968]: E1216 13:00:10.453082 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:00:10.453571 kubelet[2968]: E1216 13:00:10.453291 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d696,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hjfvx_calico-system(5f28fd1d-daa3-4b1a-9808-93af3076e192): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:10.455017 kubelet[2968]: E1216 13:00:10.454944 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hjfvx" podUID="5f28fd1d-daa3-4b1a-9808-93af3076e192" Dec 16 13:00:10.943071 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 13:00:10.943361 kernel: audit: type=1130 audit(1765890010.922:880): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.244.27.222:22-139.178.68.195:48646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:00:10.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.244.27.222:22-139.178.68.195:48646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:00:10.923668 systemd[1]: Started sshd@23-10.244.27.222:22-139.178.68.195:48646.service - OpenSSH per-connection server daemon (139.178.68.195:48646). 
Dec 16 13:00:11.735155 containerd[1627]: time="2025-12-16T13:00:11.735096848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:00:11.897000 audit[5599]: USER_ACCT pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:11.907807 sshd[5599]: Accepted publickey for core from 139.178.68.195 port 48646 ssh2: RSA SHA256:nzW4nX+OoWczWkGdWpN6K+WV2VgcBQaSLng08cWkYS4 Dec 16 13:00:11.908875 kernel: audit: type=1101 audit(1765890011.897:881): pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:11.911824 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:00:11.908000 audit[5599]: CRED_ACQ pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:11.921425 kernel: audit: type=1103 audit(1765890011.908:882): pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:11.921509 kernel: audit: type=1006 audit(1765890011.909:883): pid=5599 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 13:00:11.909000 audit[5599]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff132e3240 a2=3 a3=0 items=0 ppid=1 pid=5599 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.925743 kernel: audit: type=1300 audit(1765890011.909:883): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff132e3240 a2=3 a3=0 items=0 ppid=1 pid=5599 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.909000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:00:11.933985 kernel: audit: type=1327 audit(1765890011.909:883): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:00:11.934870 systemd-logind[1597]: New session 27 of user core. Dec 16 13:00:11.943459 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 16 13:00:11.948000 audit[5599]: USER_START pid=5599 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:11.955000 audit[5603]: CRED_ACQ pid=5603 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:11.958371 kernel: audit: type=1105 audit(1765890011.948:884): pid=5599 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:11.958467 kernel: audit: type=1103 audit(1765890011.955:885): pid=5603 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:12.080981 containerd[1627]: time="2025-12-16T13:00:12.080850483Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:12.082506 containerd[1627]: time="2025-12-16T13:00:12.082443846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:00:12.082574 containerd[1627]: time="2025-12-16T13:00:12.082552768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:12.082877 kubelet[2968]: E1216 13:00:12.082803 2968 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:12.083374 kubelet[2968]: E1216 13:00:12.082877 2968 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:12.083374 kubelet[2968]: E1216 13:00:12.083151 2968 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6f4459b6-4w99m_calico-apiserver(f26c7895-de48-48b9-98b3-5ed0a263683c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:12.085167 kubelet[2968]: E1216 13:00:12.085101 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6f4459b6-4w99m" podUID="f26c7895-de48-48b9-98b3-5ed0a263683c" Dec 16 13:00:12.580703 sshd[5603]: Connection closed by 139.178.68.195 port 48646 Dec 16 13:00:12.581708 sshd-session[5599]: pam_unix(sshd:session): session closed for user core Dec 16 13:00:12.584000 audit[5599]: USER_END pid=5599 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:12.597075 kernel: audit: type=1106 audit(1765890012.584:886): pid=5599 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:12.597187 kernel: audit: type=1104 audit(1765890012.584:887): pid=5599 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:12.584000 audit[5599]: CRED_DISP pid=5599 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:00:12.596167 systemd[1]: sshd@23-10.244.27.222:22-139.178.68.195:48646.service: Deactivated successfully. Dec 16 13:00:12.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.244.27.222:22-139.178.68.195:48646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:00:12.601809 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 13:00:12.606585 systemd-logind[1597]: Session 27 logged out. Waiting for processes to exit. Dec 16 13:00:12.609184 systemd-logind[1597]: Removed session 27. Dec 16 13:00:13.738085 kubelet[2968]: E1216 13:00:13.738023 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fb664895f-z7td5" podUID="654be68d-8474-4290-8738-6c95ee33b1c3" Dec 16 13:00:14.732910 kubelet[2968]: E1216 13:00:14.732811 2968 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frm2n" podUID="ae2d98cb-e462-4622-a2ef-d1063c3df86a"